What Storify Shutting Down Means to Us

The Storify logo.

You may have heard that popular social media story platform Storify will be shutting down on May 16, 2018. Open to the public since 2011, it has hosted everything from academic conference tweet round-ups to “Dear David”, the ongoing saga of Buzzfeed writer Adam Ellis and the ghost that haunts his apartment. So it shocked long-time users in December when Storify suddenly announced that it would be shutting down in just a few months.

Already, Storify no longer allows new accounts to be created, and as of May 1st, users won't be able to create new stories. On May 16th, everything disappears. Storify will live on as Storify 2, a feature of Livefyre, but access will require purchasing a Livefyre license. The fact is that many users cannot or will not pay for Livefyre, so for most people, Storify will effectively cease to exist on May 16th.

So… what does this mean?

Of course, it means that you need to export anything that you have stored on Storify and want to save. (They provide instructions for exporting content on their shutting down FAQ.) More than that, however, we need to talk about how we are relying on services to archive our materials online and how that is a dangerous long-term preservation strategy.

The fact is, free Internet services can change in an instant, and without consulting their user base. As we have seen with Storify — as well as other services like Google Reader — what seems permanent can disappear quickly. When it comes to long-term digital preservation, we cannot depend on these services as our only means of preservation.

That is not to say that we cannot use free digital tools like Storify. Storify was a great way to collect Tweets, present stories, and get information out to the public. And if you or your institution did not have the funds or support to create a long-term preservation plan, Storify was a great stop-gap in the meantime. But digital preservation is a marathon, not a sprint, and we will need to continue to find new, innovative ways to ensure that digital material remains accessible.

When I heard Storify was shutting down, I went to our Scholarly Commons intern Matt Pitchford, whose research is on social media and who has a real stake in maintaining digital preservation, for his take on the issue. (You can read about Matt’s research here and here.) Here’s what Matt had to say:

Thinking about [Storify shutting down] from a preservation perspective, I think it reinforces the need to develop better archival tools along two dimensions: first, along the lines of navigating the huge amounts of data and information online (like how the Library of Congress has that huge Twitter archive, but no means to access it, and which they recently announced they will stop adding to). Just having all of Storify’s data wouldn’t make it navigable. Second, that archival tools need to be able to “get back” to older forms of data. There is no such thing as a “universally constant” medium. PDFs, twitter, Facebook posts, or word documents all may disappear over time too, despite how important they seem to our lives right now. Floppy disks, older computer games or programs, and even recently CDs, aren’t “accessible” in the way they used to be. I think the same is eventually going to be true of social media.
Matt brings up some great issues here. Storify shutting down could simply be a harbinger of more change online. Social media spaces come and go (who else remembers MySpace and LiveJournal?), and even the nature of posts change (who else remembers when Tweets were just 140 characters?). As archivists, librarians, and scholars, we will have to adopt, adapt, and think quickly in order to stay ahead of forces that are out of our control.
And most importantly, we’ll have to save backups of everything we do.

Open Source Tools for Social Media Analysis

Photograph of a person holding an iPhone with various social media icons.

This post was guest authored by Kayla Abner.


Interested in social media analytics, but don’t want to shell out the bucks to get started? There are a few open source tools you can use to dabble in this field, and some even integrate data visualization. Recently, we at the Scholarly Commons tested a few of these tools, and as expected, each one has strengths and weaknesses. For our exploration, we exclusively analyzed Twitter data.

NodeXL

NodeXL’s graph for #halloween (2,000 tweets)

tl;dr: Light system footprint and provides some interesting data visualization options. Useful if you don’t have a pre-existing data set, but the one generated here is fairly small.

NodeXL is essentially a complex Excel template (it's classified as a Microsoft Office customization), which means it doesn't take up a lot of space on your hard drive. It has other advantages, too: it's easy to use, requiring only a simple search to retrieve tweets for you to analyze. However, its capabilities for large-scale analysis are limited; the user is restricted to retrieving the most recent 2,000 tweets. For example, searching Twitter for #halloween imported 2,000 tweets, every single one from the date of this writing. It is worth mentioning that there is a fancy, paid version that will expand your limit to 18,000 tweets (the maximum allowed by Twitter's API) or roughly the past 7 to 8 days of activity, whichever limit is reached first. Even then, you cannot restrict your data retrieval by date. NodeXL is a tool best suited to pulling recent social media data. In addition, if you want to study something besides Twitter, you will have to pay to get any other type of dataset, e.g., Facebook, YouTube, Flickr.
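
For context, these caps come from Twitter's standard search API rather than from NodeXL itself; any client that uses the free search endpoint hits the same roughly seven-day window. As an illustration only (NodeXL does not use this library), here is a hedged sketch with tweepy, assuming version 4.x and valid API credentials:

```python
# Illustrative sketch (not part of NodeXL): Twitter's standard search API only
# reaches back about 7 days, no matter which client asks.
# Assumes tweepy 4.x; the credential strings below are placeholders.
import tweepy

auth = tweepy.OAuth1UserHandler("CONSUMER_KEY", "CONSUMER_SECRET",
                                "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
api = tweepy.API(auth, wait_on_rate_limit=True)

# Ask for up to 18,000 tweets; the API still stops at the ~7-day boundary.
for tweet in tweepy.Cursor(api.search_tweets, q="#halloween", count=100).items(18000):
    print(tweet.created_at, tweet.text[:80])
```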

Strengths: Good for a beginner, differentiates between Mentions/Retweets and original Tweets, provides a dataset, some light data visualization tools, offers Help hints on hover

Weaknesses: 2,000 Tweet limit, free version restricted to Twitter Search Network

TAGS

TAGSExplorer’s data graph (2,902 tweets). It must mean something…

tl;dr: Add-on for Google Sheets, giving it a light system footprint as well. Higher tweet-retrieval limit than NodeXL. TAGS has the added benefit of automated data retrieval, so you can track trends over time. Its data visualization tool is in beta and needs more development.

TAGS is another complex spreadsheet template, this time created for use with Google Sheets. TAGS does not have a paid version with more social media options; it can only be used for Twitter analysis. However, it does not have the same tweet retrieval limit as NodeXL. The only limits are 18,000 tweets or the past seven days, and those are dictated by Twitter's Terms of Service, not the creators of this tool. My same search for #halloween with a limit set at 10,000 retrieved 9,902 tweets from the past seven days.

TAGS also offers a data visualization tool, TAGSExplorer, that is promising but still needs work to realize its potential. As it stands now in beta, even a dataset of 2,000 records puts so much strain on the program that it cannot keep up with the user, so it is really only workable with smaller datasets. It does offer a few interesting analysis parameters that NodeXL lacks, such as the ability to see Top Tweeters and Top Hashtags, which work better than the graph.

Image of hashtag search. These graphs have meaning!

Strengths: More data fields, such as the user’s follower and friend count, location, and language (if available), better advanced search (Boolean capabilities, restrict by date or follower count), automated data retrieval

Weaknesses: Data visualization tool needs work

Hydrator

Simple interface for Documenting the Now’s Hydrator

tl;dr: A tool used for “re-hydrating” tweet IDs into full tweets, to comply with Twitter’s Terms of Service. Not used for data analysis; useful for retrieving large datasets. Limited to datasets already available.

Documenting the Now, a group focused on collecting and preserving digital content, created the Hydrator tool to comply with Twitter’s Terms of Service. Download and distribution of full tweets to third parties is not allowed, but distribution of tweet IDs is allowed. The organization manages a Tweet Catalog with files that can be downloaded and run through the Hydrator to view the full Tweet. Researchers are also invited to submit their own dataset of Tweet IDs, but this requires use of other software to download them. This tool does not offer any data visualization, but is useful for studying and sharing large datasets (the file for the 115th US Congress contains 1,430,133 tweets!). Researchers are limited to what has already been collected, but multiple organizations provide publicly downloadable tweet ID datasets, such as Harvard’s Dataverse. Note that the rate of hydration is also limited by Twitter’s API, and the Hydrator tool manages that for you. Some of these datasets contain millions of tweet IDs, and will take days to be transformed into full tweets.
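
If you would rather script this step than use the desktop app, Documenting the Now also maintains the twarc Python library, which can hydrate a file of tweet IDs for you. The snippet below is a minimal sketch of that alternative (it is not the Hydrator tool itself); it assumes twarc 1.x, your own API credentials, and a hypothetical "tweet_ids.txt" file with one ID per line.

```python
# Minimal sketch: hydrating tweet IDs with the twarc library (v1-style API).
# Credentials and file names below are placeholders/assumptions.
import json
from twarc import Twarc

t = Twarc("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

with open("tweet_ids.txt") as ids, open("tweets.jsonl", "w") as out:
    # hydrate() handles Twitter's rate limits and yields full tweet objects
    for tweet in t.hydrate(ids):
        out.write(json.dumps(tweet) + "\n")
```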

Strengths: Provides full tweets for analysis, straightforward interface

Weaknesses: No data analysis tools

Crimson Hexagon

If you're looking for more robust analytics tools, Crimson Hexagon is a data analytics platform that specializes in social media. Not limited to Twitter, it can retrieve data from Facebook, Instagram, YouTube, and basically any other online source, like blogs or forums. The company has a partnership with Twitter and pays for greater access to their data, giving the researcher higher download limits and a longer time range than they would receive from either NodeXL or TAGS. One can access tweets going back to Twitter's inception, but these features cost money! The University of Illinois at Urbana-Champaign is one such entity paying for this platform, so researchers affiliated with our university can request access. One of the Scholarly Commons interns, Matt Pitchford, uses this tool in his research on Twitter responses to terrorism.

Whether you’re an experienced text analyst or just want to play around, these open source tools are worth considering for different uses, all without you spending a dime.

If you'd like to know more, researcher Rebekah K. Tromble recently gave a lecture at the Data Scientist Training for Librarians (DST4L) conference on how different (paid) platforms influence or bias analyses of social media data. As you start a real project analyzing social media, you'll want to know how the data you have gathered may be limited so that you can adjust your analysis accordingly.

Preparing Your Data for Topic Modeling

In keeping with my series of blog posts on my research project, this post is about how to prepare your data for input into a topic modeling package. I used Twitter data in my project, which is relatively sparse at only 140 characters per tweet, but the principles can be applied to any document or set of documents that you want to analyze.

Topic Models:

Topic models work by identifying and grouping words that co-occur into “topics.” As David Blei writes, Latent Dirichlet allocation (LDA) topic modeling makes two fundamental assumptions: “(1) There are a fixed number of patterns of word use, groups of terms that tend to occur together in documents. Call them topics. (2) Each document in the corpus exhibits the topics to varying degree. For example, suppose two of the topics are politics and film. LDA will represent a book like James E. Combs and Sara T. Combs’ Film Propaganda and American Politics: An Analysis and Filmography as partly about politics and partly about film.”

Topic models do not have any actual semantic knowledge of the words, and so do not "read" the sentences. Instead, topic models use math: the tokens/words that tend to co-occur are statistically likely to be related to one another. However, that also means the model is susceptible to "noise," falsely identifying patterns of co-occurrence when unimportant but highly repeated terms are present. As with most computational methods, "garbage in, garbage out."
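
To make the mechanics concrete, here is a minimal sketch of fitting an LDA model with the gensim library. This is only an illustration of the co-occurrence idea (the cleaned data in this post is later fed into a dedicated package such as MALLET), and the toy documents and two-topic setting are arbitrary assumptions.

```python
# Toy LDA example with gensim: the model only sees which tokens co-occur,
# not what they mean. Documents and topic count are illustrative assumptions.
from gensim import corpora, models

docs = [
    ["gun", "control", "senate", "vote", "policy"],
    ["vigil", "orlando", "community", "mourning"],
    ["policy", "senate", "debate", "gun", "law"],
    ["community", "support", "orlando", "solidarity"],
]

dictionary = corpora.Dictionary(docs)            # map each token to an integer id
corpus = [dictionary.doc2bow(d) for d in docs]   # bag-of-words counts per document

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                      passes=10, random_state=1)

# Each "topic" is just a weighted list of frequently co-occurring tokens.
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```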

In order to make sure that the topic model is identifying interesting or important patterns instead of noise, I had to accomplish the following pre-processing or “cleaning” steps.

  • First, I removed the punctuation marks, like “,.;:?!”. Without this step, commas started showing up in all of my results. Since they didn’t add to the meaning of the text, they were not necessary to analyze.
  • Second, I removed the stop-words, like “I,” “and,” and “the,” because those words are so common in any English sentence that they tend to be over-represented in the results. Many of my tweets were emotional responses, so many authors wrote in the first person. This tended to skew my results, although you should be careful about what stop words you remove. Simply removing stop-words without checking them first means that you can accidentally filter out important data.
  • Finally, I removed overly common words that were unique to my data. For example, many of my tweets were retweets and therefore contained the word "rt." I also ended up removing mentions of other authors, because highly retweeted texts meant that I was getting Twitter user handles as significant words in my results.

Cleaning the Data:

My original data set was 10 Excel files of 10,000 tweets each. In order to clean and standardize all these data points, as well as combine my files into one single document, I used OpenRefine. OpenRefine is a powerful tool, and it makes it easy to work with all your data at once, even if it is a large number of entries. I uploaded all of my datasets, then performed some quick cleaning available under the "Common Transformations" option in the triangle dropdown at the head of each column: I changed everything to lowercase, unescaped HTML characters (to make sure that I didn't get errors when trying to run it in Python), and removed extra white spaces between words.

OpenRefine also lets you use regular expressions, which is a kind of search tool for finding specific strings of characters inside other text. This allowed me to remove punctuation, hashtags, and author mentions by running a find and replace command.

  • Remove punctuation: grel:value.replace(/(\p{P}(?<!')(?<!-))/, "")
    • Any punctuation character is removed.
  • Remove users: grel:value.replace(/(@\S*)/, "")
    • Any string that begins with an @ is removed. It ends at the space following the word.
  • Remove hashtags: grel:value.replace(/(#\S*)/, "")
    • Any string that begins with a # is removed. It ends at the space following the word.
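
If you prefer to do the same cleaning in Python instead of OpenRefine, roughly equivalent substitutions can be written with the built-in re module. This is a hedged sketch of the idea, not the exact expressions used above (Python's re has no \p{P} class, so a character class stands in for "punctuation"):

```python
# Rough Python equivalent of the GREL cleaning above (a sketch, not the
# project's exact expressions).
import re

def clean_tweet(text):
    text = text.lower()
    text = re.sub(r"@\S*", "", text)          # remove user mentions
    text = re.sub(r"#\S*", "", text)          # remove hashtags
    text = re.sub(r"[^\w\s'\-]", "", text)    # remove punctuation, keeping ' and -
    return re.sub(r"\s+", " ", text).strip()  # collapse leftover whitespace

print(clean_tweet("RT @drlawyercop Gun control is a steep climb! #GunControlNow"))
# -> "rt gun control is a steep climb"
```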

Regular expressions, commonly abbreviated as "regex," can take a little getting used to before you understand how they work. Fortunately, OpenRefine itself has some solid documentation on the subject, and I also found this cheatsheet valuable as I was trying to get it to work. If you want to create your own regex search strings, regex101.com has a tool that lets you test your expression before you actually deploy it in OpenRefine.

After downloading the entire data set as a Comma Separated Value (.csv) file, I used the Natural Language Toolkit (NLTK) for Python to remove stop-words. The code itself can be found here: I first saved the content of the tweets as a single text file, and then I told NLTK to go over every line of the document and remove words that are in its common stop-word list. The output is then saved in another text file, which is ready to be fed into a topic modeling package, such as MALLET.
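
The sketch below shows the general shape of that stop-word step, assuming NLTK's English stop-word list and placeholder file names; it is a simplified stand-in for the linked code, not a copy of it.

```python
# Simplified stand-in for the stop-word removal step. File names are
# placeholders; assumes NLTK is installed and its stopwords corpus downloaded.
import nltk
from nltk.corpus import stopwords

nltk.download("stopwords", quiet=True)
stop_words = set(stopwords.words("english"))

with open("tweets_cleaned.txt") as infile, open("tweets_no_stopwords.txt", "w") as outfile:
    for line in infile:
        # keep only tokens that are not in NLTK's common English stop-word list
        kept = [word for word in line.split() if word not in stop_words]
        outfile.write(" ".join(kept) + "\n")
```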

At the end of all these cleaning steps, my resulting data is essentially composed of unique nouns and verbs, so, for example, @Phoenix_Rises13’s tweet “rt @drlawyercop since sensible, national gun control is a steep climb, how about we just start with orlando? #guncontrolnow” becomes instead “since sensible national gun control steep climb start orlando.” This means that the topic modeling will be more focused on the particular words present in each tweet, rather than commonalities of the English language.

Now my data is cleaned from any additional noise, and it is ready to be input into a topic modeling program.

Interested in working with topic models? There are two Savvy Researcher topic modeling workshops, on December 6 and December 8, that focus on the theory and practice of using topic models to answer questions in the humanities. I hope to see you there!

Studying Rhetorical Responses to Terrorism on Twitter

As a part of my internship at the Scholarly Commons, I'm going to do a series of posts describing the tools and methodologies that I've used to work on my dissertation project. This write-up serves as an introduction to my project, its larger goals, and the tools that I use to start working with my data.

The Dissertation Project

In general, my dissertation draws on computational methodologies to account for the digital circulation and fragmentation of political movement texts in new media environments. In particular, I will examine the rhetorical responses on Twitter to three terrorist attacks in the U.S.: the 2013 Boston Marathon Bombing, the 2015 San Bernardino Shooting, and the 2016 Orlando Nightclub shooting. I begin with the idea that terrorism is a kind of message directed at an audience, and I am interested in how digital audiences in the U.S. come to understand, make meaning of, and navigate uncertainty following a terrorist attack. I am interested in the patterns of narratives, community construction, and expressions of affect that characterize terrorism as a social media phenomenon.

I am interested in the following questions: What methods might rhetorical scholars use to better understand the vast numbers of texts, posts, and "tweets" that make up our social media? How do digital audiences construct meanings in light of terrorist attacks? How does the interwoven agency and materiality of digital spaces influence forms of rhetorical action, such as invention and style? In order to better address such challenges, I turn to the tools and techniques of the Digital Humanities as computational modes of analysis for examining the digitally circulated rhetoric surrounding terror events. Investigation of this rhetoric using topic models will help scholars not only to understand particular aspects of terrorism as a social media phenomenon, but also to better see the ways that community and identity are themselves formed amid digitally circulated texts.

At the beginning of this project, I had no experience working with textual data, so the following posts represent a cleaned and edited version of the learning process I went through. There was a lot of mess and exploration involved, but that is also how I came to understand a lot more.

Gathering The Tools

I use a Mac, so accessing the command line is as simple as firing up Terminal.app. Windows users have to do a bit more work in order to get all these tools, but plenty of tutorials can be found with a quick search.

Python (Anaconda)
The first big choice was whether to learn R or Python. I'd heard that Python was better for text and R was better for statistical work, but it seems that it mostly comes down to personal preference, as you can find people doing both kinds of work in either language. Both R and Python have a bit of a learning curve, but a quick search for topic modeling in Python gave me a ton of useful results, so I chose to start there.

Anaconda is a package management system for the Python language. What's great about Anaconda is not only that it has a robust management system (so I can easily download the tools and libraries that I need without having to worry about dependencies or other errors), but also that it encourages the creation of "environments" for you to work in. This means that I can make mistakes or install and uninstall packages without having to worry about messing up my overall system or my other environments.

Instructions for downloading Anaconda can be found here, and I found this cheat-sheet very useful in setting up my initial environments. Python has a ton of documentation, so these pages are useful, and there are plenty of tutorials online. Each environment comes with a few default packages, and I quickly added some toolkits for processing text and plotting graphs.

Confirming the Conda installation in Terminal, activating an environment, and listing the installed packages.

StackOverflow
Lots of people working with Python have the same problems or issues that I did. Whenever my code encountered an error, or when I didn’t know how to do something like write to a .txt file, searching StackOverflow usually got me on the right track. Most answers link to the Python documentation that relates to the question, so not only did I fix what was wrong but I also learned why.

GitHub
Scholars sometimes put their code on GitHub for sharing, advancing research, and confirming their findings. I found code for topic modeling in Python here, and I also set up repositories for my own work. GitHub is a useful version control system, so it also meant that I never "lost" old code and could track changes over time.

Programming Historian
This is a site for scholars interested in learning how to use tools for Digital Humanities work. There are some great tutorials here on a range of topics, including how to set up and use Python. It’s approachable and does a good job of covering everything you need to know.

These tools, taken together, form the basis of my workspace for dealing with my data. Upcoming topics will cover Data Collection, Cleaning the Data, Topic Models, and Graphing the Results.

Introducing Matthew Pitchford, Scholarly Commons Intern

Matthew Pitchford, Scholarly Commons Intern

This latest installment of our series of interviews with Scholarly Commons experts and affiliates features Matthew Pitchford, Scholarly Commons Intern. Matt started working at the Scholarly Commons in August 2017.


What is your background education and work experience?

I would call myself a rhetorician. I earned my Bachelor’s degree from Willamette University in Oregon before coming to U of I for my Master’s in Communication, which I received in 2014. I am currently working toward my PhD in Communication. I’ve taught introduction to public speaking and writing, argumentation, and communicating public policy. The courses I teach tend to focus on thinking about how rhetoric intersects with contemporary political discourse and how people use rhetoric to make arguments in that arena.

What led you to this field?

My interest in communication began back in high school in Washington State, where I competed in speech and debate. I also worked for a few college newspapers, where I discovered I was interested in political communication. When I entered college I originally set out to be a political science major, but I quickly realized that the ways of thinking about political communication in the field of rhetoric interested me more.

What is your research agenda?

I study the rhetoric of digital spaces. I’m interested in what changes and what stays the same when we start to think about rhetorical theories in the context of new media and social media. How should our theories change when we think about rhetoric in a digital space? My research here at the Scholarly Commons is about Twitter responses to terrorist events. Some of the questions I’m asking are: How do people on Twitter talk about these events? What are the political communities they’re imagining when they speak about these events? What are the ways of articulating one’s political views in this context?

Do you have any favorite work-related duties?

My favorite work-related duty is talking to the subject specialists at the Scholarly Commons. It's fun to gain insight and new ways of seeing my research by discussing the problems I'm facing with my colleagues. They're a great resource because their diversity helps me conceptualize my research in new ways.

What are some of your favorite underutilized resources that you would recommend to researchers?

Savvy Researcher Workshops. The workshops for some of the more obscure topics aren’t heavily attended, but they helped me get a gauge on how other people were working on their projects and showed me what tools I should be using.

If you could recommend only one book to beginning researchers in your field, what would you recommend?

I’m cheating and choosing two books, one for rhetoric and one for digital humanities. For rhetoric I’d recommend Still Life with Rhetoric by Laurie Gries. It’s about the digital circulation of images and represents a way of thinking about distributed rhetorical activity in digital contexts. And for digital humanities I’d recommend Reading Machines: Toward an Algorithmic Criticism by Stephen Ramsay. It makes a broader call for “algorithmic criticism” that uses computation as a productive constraint under which humanistic inquiry can take place.

Want to get in touch with Matthew? Send him an email or come visit him at the Scholarly Commons!

Use Sifter for Twitter Research

For many academics, Twitter is an increasingly important source. Whether you love it or hate it, Twitter dominates information dissemination and discourse, and will continue to do so for the foreseeable future. However, actually sorting through Twitter — especially for large-scale projects — can be deceptively difficult, and a deterrent for would-be Twitter scholars. That is why Sifter will go through Twitter for you.

Sifter is a paid service (pricing is discussed in greater detail below) that provides search and retrieval access to undeleted Tweets. Retrieved tweets are stored in an Enterprise DiscoverText account, which allows the user to perform data analytics on the Tweets. The DiscoverText account comes with a fourteen-day free trial, but for prolonged use the user will have to pay for account access.

However, Sifter can become prohibitively expensive. Each user gets three free estimates a day. After that, it is $20 per day of data retrieval and $30 per 100,000 Tweets. Larger purchases (over $500 and over $1,500) receive longer DiscoverText trials with access added for additional users. There are no refunds, so prior to making your purchase, make sure that you have done enough research to know exactly what data you want and which filters you'd like to use.

Possible filters that you can request when using Sifter.

Have you used Sifter? Or DiscoverText? What was your experience like? Alternatively, do you have a free resource that you prefer to use for Twitter data analytics? Please let us know in the comments!

Pinterest Pages for Researchers

The Pinterest logo.

When one thinks of Pinterest, they tend to associate it with weeknight crock pot recipes and lifehacks that may or may not work. But Pinterest can also be a free, widely accessible place to store and share links and information relating to your academic discipline. In this post, we'll look at how three groups use Pinterest in different ways to help their missions, then go through some pros and cons of using Pinterest for academic endeavors.

Examples of Groups Using Pinterest

A Digital Tool Box for Historians

A Digital Tool Box for Historians is exactly what it says on the tin. On the date this post was written, A Digital Tool Box for Historians boasts 124 pins, each a link to a digital resource that can help historians. Resources range from free-to-use websites to pay-to-use software and everything in between. It is an easy-to-follow board made for easy browsing.

Europeana

Europeana is a website dedicated to collecting and sharing cultural artifacts and art from around the world. Their Pinterest page serves as a virtual museum, with pins grouped into thematic boards as if they were galleries. With over a hundred and fifty boards, their subject matter ranges from broad themes (such as their Birds and Symbolism board) and artistic media (such as their Posters board) to specific artistic movements or artists (such as their Henri Verstijnen – Satirical Drawings board). Pinterest users can then subscribe to favorite boards and share pieces that they find moving, thus increasing the dissemination of pieces that could remain static if only kept on the Europeana website.

Love Your Data Week

Sponsored by — you guessed it — Love Your Data Week, the Love Your Data Week Pinterest board serves as a community place to help institutions prepare for Love Your Data Week. Resources shared on the Love Your Data Week board can either be saved to an institution’s own Love Your Data board, or used on their other social media channels to spark discussion.

Pros and Cons of Pinterest

  • Pros
    • Can spread your work to a non-academic audience
    • Free
    • Easily accessible
    • Easy to use
    • Brings content from other platforms you may use together
    • Visually appealing
    • Well-known
  • Cons
    • Poor tagging and search systems
    • Interface can be difficult to use, especially for users with disabilities
    • Content gets “buried” very quickly
    • Poor for long-format content
    • Non-academic reputation

Whether it’s a gallery, tool kit, or resource aggregation, Pinterest shows potential for growth in academic and research circles. Have you used Pinterest for academics before? How’d it go? Any tips you’d like to give? Let us know in the comments!


Spotlight: Postach.io Blogging Platform

Many people use Evernote to keep their research (and life) organized. This notebook-based note-taking platform has grown so popular that it spawned Postach.io, a blogging platform that connects with Evernote and uses Evernote notes as the content of blog posts. Basically, you can take the notes you've created in Evernote and directly publish them for anyone to see!

If you're someone who is already familiar with and using Evernote, Postach.io may be a great, free platform for you to get your research out there. While it doesn't have the same kind of customization options that you can get on WordPress or Tumblr, nor the built-in audiences of those sites, its simplified style and integration with Evernote make it a useful tool, especially since Postach.io is free and only requires that you have (or create) an Evernote account.

To start, you must link your Evernote account with Postach.io. After you submit your contact information, the site will automatically transfer you to Evernote.


The first step to creating a Postach.io site is to give your name, email address, and password.


The Evernote page that Postach.io links you to.

Evernote will then ask whether you'd like to create a new notebook for your Postach.io site or link to a notebook already in use. Note that linking to an already-created notebook does not automatically make your notes public. Each note on the site must have a 'published' tag attached to it in order to be public. I'll have more on that in a little bit.

You can also choose the length of time Postach.io will have access to your notebook. Lengths range from a minimum of one day to a maximum of one year. After that period, Postach.io will either lose access to that notebook, or you will have to reauthorize it.

After you authorize your account, you will have the opportunity to create an Evernote note that will serve as your initial Postach.io post. The most important part of this process is tagging the post as “Published.” A note that lacks this tag will not be put on your Postach.io site, even if it’s in your authorized notebook.


Me adding my “published” tag to ensure that my post is added to my Postach.io site.

Once you finish and tag your post, your Postach.io account is officially up and running.

As far as the site itself, your options are somewhat limited. This is what your site will look like immediately after you publish your first post:

A very generic theme.

You do have the option to change your avatar and background image, as well as choose from a little over a dozen themes to work with. These themes, however, are all incredibly basic, with few customization options beyond the basic appearance. In order to access the source code for your site or to create a custom theme, you will need to upgrade to a paid account.

A paid account will let you access that source code, as stated above, as well as create multiple sites. With a free account, you can only have one site at a time. $5/month gets you five sites, $15/month will get you twenty sites, and $25/month will give you fifty. If you pay for an entire year in advance, you’ll get two months out of the year free. In my opinion, you’re better off using a free platform like Tumblr or WordPress and transferring your Evernote data than opting for a paid account.

Overall, Postach.io is a simple way to get work that you’ve already started in Evernote published and readable by the world.

Do you think you’ll use Postach.io? What blogging platforms do you use? Let us know in the comments, or Tweet us at @scholcommons!