Election Forecasts and the Importance of Good Data Visualization

In the wake of the 2016 presidential election, many people, on the left and right alike, came together on the internet to express a united sentiment: that the media had called the election wrong. One man in particular received the brunt of this negative attention. Nate Silver and his website FiveThirtyEight have taken nearly endless flak from disgruntled Twitter users over the past two years for their forecast, which gave Hillary Clinton a 71.4% chance of winning.

However, as Nate Silver has argued in many articles and tweets, he did not call the race “wrong” at all; everyone else simply misinterpreted his forecast. So what really happened? How could Nate Silver say that he wasn’t wrong when so many believe to this day that he was? As believers in good data visualization practice, we here in the Scholarly Commons can tell you that if everyone interprets your data to mean one thing when you really meant it to convey something else entirely, your visualization may be the problem.

Today is Election Day, and once again, FiveThirtyEight has new models out forecasting the various House, Senate, and Governors races on the ballot. However, these models look quite a bit different from 2016’s, and in those differences lie some important data viz lessons. Let’s dive in and see what we can see!

The image above is a screenshot taken from the very top of the page for FiveThirtyEight’s 2016 Presidential Election Forecast, which was last updated on the morning of Election Day 2016. The image shows a bar across the top, filled in blue 71.4% of the way to represent Clinton’s chance of winning, and red for the remaining 28.6% to represent Trump’s chance of winning. Below this bar is a map of the fifty states, colored from dark red to light red to light blue to dark blue, representing the percentage chance that each state goes for one of the two candidates.

The model also allows you to get a sense of where exactly each state stands by hovering your cursor over a particular state. In the above example, we can see a bar similar to the one at the top of the national forecast, which shows Clinton’s 55.1% chance of winning Florida.

The top line of FiveThirtyEight’s 2018 predictions looks quite a bit different. When you open the House or Senate forecasts, the first thing you see is a bell curve, not a map, as exemplified by the image of the House forecast below.

At first glance, this image may be more difficult to take in than a simple map, but it actually contains a lot of information that is essential to anyone hoping to get a sense of where the election stands. First, the top-line likelihood of each party taking control is expressed as a fraction rather than as a percent. The reasoning behind this is that some feel the percent bar from the 2016 model improperly gave the sense that Clinton’s win was a sure thing. The editors at FiveThirtyEight hope that fractions will do a better job than percentages at conveying the uncertainty in the forecasted outcome.

Beyond this, the bell curve shows forecasted percentage chances for every possible outcome (for example, at the time of writing, there is a 2.8% chance that Democrats gain 37 seats, a 1.6% chance that Democrats gain 20 seats, a <0.1% chance that Democrats gain 97 seats, and a <0.1% chance that Republicans gain 12 seats). This visualization shows the inner workings of how the model makes its prediction. Importantly, it drives home the idea that any result could happen, even if one result is considered more likely than others. What’s more, the model features a gray rectangle centered around the average result that highlights the middle 80% of the forecast: there is an 80% chance that the result will be between a Democratic gain of 20 seats (meaning Republicans would hold the House) and a Democratic gain of 54 (a so-called “blue wave”).
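To make the idea of a forecast distribution concrete, here is a minimal sketch in Python. It is not FiveThirtyEight’s actual model; the average, spread, and seat threshold are made-up assumptions, used only to show how a middle-80% band like the gray rectangle can be read off a pile of simulated outcomes.

```python
# A minimal, illustrative sketch; not FiveThirtyEight's model. The mean, spread,
# and majority threshold below are assumptions chosen only for demonstration.
import numpy as np

rng = np.random.default_rng(538)

# Pretend each simulation is one possible Democratic seat gain.
simulated_gains = rng.normal(loc=37, scale=13, size=100_000).round()

# The middle 80% of outcomes: the analogue of the gray rectangle on the chart.
low, high = np.percentile(simulated_gains, [10, 90])

# Roughly 23 net seats were needed for a Democratic House majority in 2018.
p_majority = (simulated_gains >= 23).mean()

print(f"80% of simulated outcomes fall between +{low:.0f} and +{high:.0f} seats")
print(f"Share of simulations with a Democratic majority: {p_majority:.1%}")
```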

The 2018 models do feature maps as well, such as the above map for the Governors forecast. But some distinct changes have been made. First, you have to scroll down to get to the map, hopefully absorbing some important information from the graphs at the top in the meantime. Most prominently, FiveThirtyEight has re-thought the color palette they are using. Whereas the 2016 forecast only featured shades of red and blue, this year the models use gray (House) and white (Senate and Governors) to represent toss-ups and races that only slightly lean one way or the other. If this color scheme had been used in 2016, North Carolina and Florida, both states that ended up going for Trump but were colored blue on the map, would have been much more accurately depicted not as “blue states” but as toss-ups.

Once again, hovering over a state or district gives you a detail of the forecast for that place in particular, but FiveThirtyEight has improved that as well.

Here we can see much more information than was provided in the hover-over function for the 2016 map. Perhaps most importantly, this screen shows us the forecasted vote share for each candidate, including the average, high, and low ends of the prediction. So for example, from the above screenshot for Illinois’ 13th Congressional District (home to the University of Illinois!) we can see that Rodney Davis is projected to win, but there is a very real scenario in which Betsy Dirksen Londrigan ends up beating him.

FiveThirtyEight did not significantly change how their models make predictions between 2016 and this year. The data itself is treated in roughly the same way. But as we can see from these comparisons, the way that this data is presented can make a big difference in terms of how we interpret it. 

Will these efforts at better data visualization be enough to deter angry reactions when the models are compared with actual election results? We’ll just have to tune in to the replies on Nate Silver’s Twitter account tomorrow morning to find out… In the meantime, check out their House, Senate, and Governors forecasts for yourself!

 

All screenshots taken from fivethirtyeight.com. Images of the 2016 models reflect the “Polls-only” forecast. Images of the 2018 models reflect the “Classic” forecasts as of the end of the day on November 5th, 2018.

Lightning Review: Data Visualization for Success

Data visualization is where the humanities and sciences meet: viewers are dazzled by the presentation yet informed by research. Lovingly referred to as “the poster child of interdisciplinarity” by Steven Braun, data visualization brings these two fields closer together than ever, providing insights that neither could offer on its own. In his book Data Visualization for Success, Braun sits down with forty designers with experience in the field to discuss their approaches to data visualization, common techniques in their work, and tips for beginners.

Braun’s collection of interviews provides an accessible introduction to data visualization. Not only is the book filled with rich images, but each interview is short and meant to offer an individual’s perspective on their own work and the field at large. Each interview begins with a general question about data visualization, contributing to the perpetual debate over what data visualization is and can be moving forward.

Picture of Braun's "Data Visualization for Success"

Antonio Farach, one of the designers interviewed in the book, calls data visualization “the future of storytelling.” And when you see his work – or really any of the work in this book – you can see why. Each new image has an immediate draw, but it is impossible to move past without exploring a rich narrative. Visualizations in this book cover topics ranging from soccer matches to classic literature, economic disparities, selfie culture, and beyond.

Each interview ends by asking the designer for their advice to beginners, which not only invites new scholars and designers to participate in the field but also dispels any doubt about the hard work put in by these designers or the science at the root of it all. However, Barbara Hahn and Christine Zimmermann of Hahn+Zimmermann may have put it best: “Data visualization is not making boring data look fancy and interesting. Data visualization is about communicating specific content and giving equal weight to information and aesthetics.”

A leisurely, stunning, yet informative read, Data Visualization for Success offers anyone interested in this explosive field an insider’s look from voices around the world. Drop by the Scholarly Commons during our regular hours to flip through this wonderful read.

And finally, if you have any further interest in data visualization, make sure you stay up to date on our Exploring Data Visualization series or take a look at what services the Scholarly Commons provides!

Analyze and Visualize Your Humanities Data with Palladio

How do you make sense of hundreds of years of handwritten scholarly correspondence? Humanists at Stanford University had the same question, and developed the project Mapping the Republic of Letters to answer it. The project maps scholarly social networks in a time when exchanging ideas meant waiting months for a letter to arrive from across the Atlantic, not mere seconds for a tweet to show up in your feed. The tools used in this project inspired the Humanities + Design lab at Stanford University to create a set of free tools specifically designed for historical data, which can be multi-dimensional and not suitable for analysis with statistical software. Enter Palladio!

To start mapping connections in Palladio, you first need some structured, tabular data. A spreadsheet saved in CSV format with data that is categorized and sorted is sufficient. Once you have your data, just upload it and get analyzing. Palladio likes data about two types of things: people and places. The sample data Palladio provides is information about influential people who visited or were otherwise connected with the itty bitty country of Monaco. Read on for some cool things you can do with historical data.
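If you are wondering what “structured, tabular data” looks like in practice, here is a small sketch that writes a Palladio-ready CSV with Python. The column names and rows are invented for illustration, not a required Palladio schema; after uploading, you map your own columns onto people, places, and dates.

```python
# A sketch of the kind of tidy CSV Palladio can ingest. The columns and rows
# below are invented examples, not a required Palladio schema.
import csv

rows = [
    {"Name": "Person A", "Gender": "Female", "Occupation": "Author",
     "Birthplace": "Paris", "Arrival point": "Monaco", "Arrival year": "1868"},
    {"Name": "Person B", "Gender": "Male", "Occupation": "Gambler",
     "Birthplace": "Galicia", "Arrival point": "Monaco", "Arrival year": "1872"},
]

with open("palladio_sample.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```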

Mapping

Use the Map feature to mark coordinates and connections between them. Using the sample data that the HD Lab provided, I created the map below, which shows birthplaces and arrival points. Hovering over a connection shows you the direction of the move. You can change the base map to standard views like satellite or terrain, or even to plain land masses with no human-created geography such as roads or place names.

Map of Mediterranean sea and surrounding lands of Europe, red lines across map show movement, all end in Monaco

One person in our dataset was born in Galicia, and later arrived in Monaco.

But, what if you want to combine this new-fangled spatial analysis with something actually historic? You’re in luck! Palladio allows you to use other maps as bases, provided that the map has been georeferenced (assigned coordinates based on locations represented on the image). The New York Public Library’s Map Warper is a collection of some georeferenced maps. Now you can show movement on a map that’s actually from the time period you’re studying!

Same red lines across map as above, but image of map itself is a historical map

The same birthplace to arrival point data, but now with an older map!

Network Graphs

Perhaps the connections you want to see, like those between people, don’t make sense on a map. This is where the Graph feature comes in. Graph allows you to create network visualizations based on different facets of your data. In general, network graphs display relationships between entities, and work best if all your nodes (dots) are the same type of information. They are especially useful for showing connections between people, but our sample data doesn’t have that information. Instead, we can visualize our people’s occupations by gender.
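As a rough illustration of what a network graph like this is doing under the hood, here is a small sketch using the networkx library; the occupation-gender pairs below are invented for illustration, not Palladio’s actual sample data.

```python
# A rough sketch of an occupation-by-gender network, similar in spirit to
# Palladio's Graph view. The pairs below are invented for illustration.
import networkx as nx
import matplotlib.pyplot as plt

pairs = [
    ("Author", "Male"), ("Gambler", "Male"), ("Journalist", "Male"),
    ("Aristocracy", "Female"), ("Spouse", "Female"),
    ("Artist", "Male"), ("Artist", "Female"),  # an occupation shared by both
]

G = nx.Graph()
G.add_edges_from(pairs)  # each pair becomes an edge; nodes are added automatically

nx.draw_networkx(G, node_color="lightgray")
plt.axis("off")
plt.show()
```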

network graph shows connections between peoples' occupations and their gender

Most occupations have both males and females, but only males are Monegasque, Author, Gambler, or Journalist, and only females are Aristocracy or Spouse.

The network graph makes it especially visible that there are some slight inconsistencies in the data; at least one person has “Aristocracy” as an occupation, while others have “Aristocrat.” Cleaning and standardizing your data is key! That sounds like a job for…OpenRefine!
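For a sense of what that cleanup involves, here is a tiny sketch of the same kind of standardization done with pandas; the variant spellings and the mapping are invented for illustration, and OpenRefine will do the same job without any code.

```python
# The sort of standardization OpenRefine automates, sketched with pandas.
# The variant values and the mapping are invented for illustration.
import pandas as pd

df = pd.DataFrame({"Occupation": ["Aristocracy", "Aristocrat", "aristocrat", "Author"]})

df["Occupation"] = (
    df["Occupation"]
    .str.strip()                              # drop stray whitespace
    .str.capitalize()                         # unify casing
    .replace({"Aristocrat": "Aristocracy"})   # collapse variants onto one label
)

print(df["Occupation"].value_counts())
```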

Timelines

All of the tools in Palladio have the same Timeline functionality. This basically allows you to filter the data used in your visualization by a date, whether that’s birthdate, date of death, publication date, or whatever timey wimey stuff you have in your dataset. Other types of data can be filtered using the Facet function, right next to the Timeline. Play around with filtering, and watch your visualization change.
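Under the hood, a timeline filter is just a date-range subset of your table. Here is a hedged sketch with pandas; the column name and dates are invented, not anything Palladio requires.

```python
# What a timeline filter boils down to: keep the rows whose date falls inside a
# chosen window. The column name and dates are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "Name": ["Person A", "Person B", "Person C"],
    "Arrival date": pd.to_datetime(["1868-05-01", "1872-09-15", "1890-02-20"]),
})

in_window = df[df["Arrival date"].between("1860-01-01", "1875-12-31")]
print(in_window)
```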

Try Palladio today! If you need more direction, check out this step-by-step tutorial by Miriam Posner. The tutorial is a few years old and the interface has changed slightly, so don’t panic if the buttons look different!

Did you create something cool in Palladio? Post a comment below, or tell us about it on Twitter!

 

Lightning Review: the truthful art by Alberto Cairo

Image of the truthful art

Hailed by one of our librarians as a brilliant and seminal text for understanding data visualization, the truthful art is a book that can serve both novices and masters in the field of visualization.

The book is packed with detailed descriptions, explanations, and images of just how Cairo wants readers to understand and engage with knowledge and data. Nearly every page, in fact, is filled with examples of the methods Cairo is trying to connect his readers to.

Cairo’s work not only teaches readers how to best design their own visualizations, but also explains how to *read* data visualizations themselves. Portions of chapters are devoted to the necessity of ‘truthful’ visualizations, not only because “if someone hides data from you, they probably have something to hide” (Cairo, 2016, p. 49), but also because the exact same data, when presented in different ways, can completely change the audience’s perspective on what the ‘truth’ of the matter is.

The more I read through the truthful art, the harder time I had putting it down. I was struck by Cairo’s presentations of data and how vastly they could differ depending upon the medium through which they were visualized. It was amazing how Cairo could instantly pick apart a bad visualization and replace it with one that was simultaneously more truthful and more beautiful.

There is a specific portion of Chapter 2 where Cairo gives a very interesting visualization of “How Chicago Changed the Course of Its Rivers.” It’s detailed, informative, and very much a classic data visualization.

Then he compared it to a fountain.

The fountain was beautiful, and designed to tell the same story as the maps Cairo had created. It was fascinating to see data presented that way; I hadn’t fully considered that data could be represented in such a unique form.

the truthful art is here on our shelves in the Scholarly Commons, and we hope you’ll stop by and give it a read! It’s certainly a worthwhile one!

Introducing Megan Ozeran, Data Analytics and Visualization Resident Librarian

Photograph of Megan Ozeran

This latest installment of our series of interviews with Scholarly Commons experts and affiliates features Megan Ozeran, the Data Analytics and Visualization Resident Librarian at the Scholarly Commons. Welcome, Megan!


What is your background and work experience?

I received a BA in Media Studies with an English minor from Pomona College (in sunny southern California). After graduating I couldn’t justify going to grad school for Cultural Studies, as much as the subject area fascinated me. The obvious career path with that degree is to become a professor, which I didn’t want to do. After some time unemployed, I started a job as a worker’s compensation claims adjuster, which taught me a lot about our broken healthcare system and was generally dissatisfying. My father, a former surgeon who is active in health policy, started a health information technology company, so I quit insurance and started working for him.

This job is where I learned about computer programming, user interface design, business intelligence, strategic planning, and attending industry conferences. After a couple of years I decided to go to library school. I enrolled at San Jose State University and started volunteering for a local independent LGBTQ library to gain real-world experience. (Check out California’s wonderful Lavender Library.) After a semester I started a part-time job at a small community college library and quit the health IT business. The community college library ended up being too small for me to gain as much experience as I hoped, so I took a summer internship at California State University Northridge, where I explored three different aspects of digital services: the institutional repository, digitization of special collections, and electronic resources management. After receiving my MLIS this past May, I applied for a dozen jobs and eventually moved 2,000 miles to be Illinois’ Data Analytics and Visualization Resident Librarian.

What led you to this field?

When I was struggling to decide on a career path, I stumbled across the library sector and dug deeper. I saw that there were so many different kinds of jobs working in libraries, in large part because of social and technological shifts, and many of these jobs intrigued me. Around the time of my internship I created a personal career mission: to use current and emerging technologies to enhance access to information and resources. It’s all about harnessing the power of technology to empower people.

What is your research agenda?

I’m exploring the ethics of data analysis and data visualization. We have tools to analyze an astonishing amount and variety of data, but how many people critically evaluate their assumptions and decisions when performing these analyses? How many people are taught to consider ethical principles when they are taught software and algorithms? How many people consider ethical principles when they design data visualizations? Algorithms and analytics are increasingly running people’s lives, so we need to ensure that we deploy them ethically.

Do you have any favorite work-related duties?

I’m still very new so I’m constantly learning, which is both challenging and exciting. My favorite part has been connecting with researchers (whether students, faculty or staff) to learn about the great research projects they are doing on campus.

What are some of your favorite underutilized resources that you would recommend to researchers?

I’m not sure how many researchers know that the Scholarly Commons lab is a great place to come and explore your data if you’re not set on a specific analysis process. Our computers have an extensive collection of software that you can use to analyze either quantitative or qualitative data. Importantly, you can come try out software that might otherwise be very expensive.

Also, I am an underutilized resource! I’m still learning, but if you have data analytics or visualization questions, stop by Scholarly Commons or shoot me an email and we can set up a time to chat.

If you could recommend only one book to beginning researchers in your field, what would you recommend?

Doing Data Science by Rachel Schutt and Cathy O’Neil is a great primer to all things data science (we have the ebook version in the catalog). I’m still learning myself, so I’m open to recommendations, too!

Neatline 101: Getting Started

Here at Commons Knowledge we love easy-to-use interactive map creation software! We’ve compared and contrasted different tools, and talked about StoryMap JS and Shanti Interactive. The Scholarly Commons is a great place to get help on GIS projects, from ArcGIS StoryMaps and beyond. But if you want something where you can have both a map and a timeline, and if you are willing to spend money on your own server, definitely consider using Neatline.

Neatline is a plugin created by the Scholars’ Lab at the University of Virginia that lets you create interactive maps and timelines in Omeka exhibits. My personal favorite example is the demo site by Paul Mawyer, “‘I am it and it is I’: Lovecraft in Providence,” with map tiles from Stamen Design under a CC-BY 3.0 license.

Screenshot of Lovecraft Neatline exhibit

*As for the location of Lovecraft’s most famous creation, let’s just say “Ph’nglui mglw’nafh Cthulhu R’lyeh wgah’nagl fhtagn.”

Now, one caveat — Neatline requires a server. I used Reclaim Hosting, which is straightforward and which I have also used for Scalar and Mukurtu. The cheapest plan available on Reclaim Hosting was $32 a year. Once I signed up for the website and domain name, I took advantage of one nice feature of Reclaim Hosting, which lets you one-click install the Omeka.org content management system (CMS). The Omeka CMS is a popular choice for digital humanities users. Other popular content management systems include WordPress and Scalar.

One click install of Omeka through Reclaim Hosting

BUT WAIT, WHAT ABOUT OMEKA THROUGH SCHOLARLY COMMONS?

Here at the Scholarly Commons we can set up an Omeka.net site for you. You can find more information on setting up an Omeka.net site through the Scholarly Commons here. This is a great option for people who want to create a regular Omeka exhibit. However, Neatline is only available as a plugin for Omeka.org, which needs to be hosted on a server. As far as I know, there is currently no Neatline plugin for Omeka.net, and I don’t think that will be happening anytime soon. On Reclaim you can install Omeka.org on any LAMP server. And some side advice from your very forgetful blogger: write down whatever username and password you make up when you set up your Omeka site. That will save you a lot of trouble later, especially considering how many accounts you end up with when you use a server to host a site.

Okay, I’m still interested, but what do I do once I have Omeka.org installed? 

So back to the demo. I used the instructions on the Neatline documentation page, which were good for defining a lot of the terms but not so good at explaining exactly what to do. I am focusing on the original Neatline plugin, but there are other Neatline plugins like NeatlineText, depending on your needs. However, all plugins are installed in a similar way. You can follow the official instructions here at Installing Neatline.

But I have also provided some instructions of my own below, because the official instructions just didn’t do it for me.

So first off, download the Neatline zip file.

Go to your Control Panel, cPanel in Reclaim Hosting, and click on “File Manager.”

File Manager circled on Reclaim Hosting

Sorry this looks so goofy; the Windows Snipping Tool’s free-form mode is only for those with a steady hand.

Navigate to the Plugins folder.

arrow points at plugins folder in file manager

Double click to open the folder. Click Upload Files.

more arrows pointing at tiny upload option in Plugins folder

If you’re using Reclaim Hosting, IGNORE THE INSTRUCTIONS: DO NOT UNZIP THE ZIP FILE ON YOUR COMPUTER. JUST PLOP THAT PUPPY RIGHT INTO YOUR PLUGINS FOLDER.

Upload the entire zip file

Plop it in!

Go back to the Plugins folder. Right-click the Neatline zip file and click Extract. Save the extracted files in Plugins.

Extract Neatline files in File Manager
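If you happen to have SSH access to your server, the same extraction can also be done from the command line instead of the File Manager. Here is a hedged Python sketch; the paths and the zip filename are placeholders for whatever your own Omeka install and Neatline download are called.

```python
# An alternative to the File Manager for people with SSH access: extract the
# uploaded Neatline zip straight into Omeka's plugins folder. Both paths below
# are placeholders; substitute the locations from your own install.
import zipfile

plugins_dir = "/home/youraccount/public_html/omeka/plugins"  # hypothetical path
plugin_zip = f"{plugins_dir}/Neatline.zip"                   # hypothetical filename

with zipfile.ZipFile(plugin_zip) as zf:
    zf.extractall(plugins_dir)
```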

Sign into your Omeka site at [yourdomainname].[com/name/whatever]/admin if you aren’t already.

Omeka dashboard with arrows pointing at Plugins

Install Neatline for real.

Omeka Plugins page

Still confused or having trouble with setup?

Check out these tutorials as well!

OpenStreetMap is great and all, but what if I want to create a fancy historical map?

To create historical maps on Neatline you have two options, only one of which is included in the actual documentation for Neatline.

Officially, you are supposed to use GeoServer. GeoServer is an open-source server application built in Java. Even if you have your own server, it requires a lot more dependencies to run than Omeka and Neatline do.

If you want one-click Neatline installation with GeoServer and have money to spend, you might want to check out AcuGIS Neatline Cloud Hosting, which is recommended in the Neatline documentation; its lowest-cost plan starts at $250 a year.

Unofficially, there is a tutorial for this available on Lincoln Mullen’s blog “The Backward Glance,” specifically his 2015 post “How to Use Neatline with Map Warper Instead of Geoserver.”

Let us know about the ways you incorporate geospatial data in your research!  And stay tuned for Neatline 102: Creating a simple exhibit!

Works Cited:

Extending Omeka with Plugins. (2016, July 5). Retrieved May 23, 2017, from http://history2016.doingdh.org/week-1-wednesday/extending-omeka-with-plugins/

Installing Neatline. Neatline Documentation. (n.d.). Retrieved May 23, 2017, from http://docs.neatline.org/installing-neatline.html

Mawyer, Paul. (n.d.). “I am it and it is I”: Lovecraft in Providence. Retrieved May 23, 2017, from http://lovecraft.neatline.org/neatline-exhibits/show/lovecraft-in-providence/fullscreen

Mullen, Lincoln. (2015). “How to Use Neatline with Map Warper Instead of Geoserver.” Retrieved May 23, 2017, from http://lincolnmullen.com/blog/how-to-use-neatline-with-map-warper-instead-of-geoserver/

Uploading Plugins to Omeka. (n.d.). Retrieved May 23, 2017, from https://community.reclaimhosting.com/t/uploading-plugins-to-omeka/195

Working with Omeka. (n.d.). Retrieved May 23, 2017, from https://community.reclaimhosting.com/t/working-with-omeka/194

Adventures at the Spring 2017 Library Hackathon

This year I participated in an event called HackCulture: A Hackathon for the Humanities, which was organized by the University Library. This interdisciplinary hackathon brought together participants and judges from a variety of fields.

This event is different from your average campus hackathon. For one, it’s about expanding humanities knowledge. In this event, teams of undergraduate and graduate students — typically affiliated with the iSchool in some way — spend a few weeks working on data-driven projects related to humanities research topics. This year, in celebration of the sesquicentennial of the University of Illinois at Urbana-Champaign, we looked at data about a variety of facets of university life provided by the University Archives.

This was a good experience. We got firsthand experience working with data, though my teammates and I struggled with OpenRefine, so we ended up coding data by hand. I now know way too much about the majors that are available at UIUC and how many majors have only come into existence in the last thirty years. It is always cool to see how much has changed and how much has stayed the same.

The other big challenge was that not everyone on the team had experience with design, so trying to convince folks not to fall into certain traps was tricky.

For an idea of how our group functioned, I outlined how we were feeling during the various checkpoints across the process.

Opening:

We had grand plans and great dreams and all kinds of data to work with. How young and naive we were.

Midpoint Check:

Laura was working on the Python script and sent a well-timed email about what was and wasn’t possible to get done in the time we were given. I find public speaking challenging, so that was not my favorite workshop, but I would say it went alright.

Final:

We prevailed and presented something that worked in public. Laura wrote a great Python script and cleaned up a lot of the data. You can even find it here. One day in the near future it will be in IDEALS as well, where you can already check out projects from our fellow humanities hackers.

Key takeaways:

  • Choose your teammates wisely; try to pick a team of folks you’ve worked with before. Working with a mix of new and not-so-new people in a short time frame is hard.
  • Talk to your potential client base! This was definitely something we should have done more of.
  • Go to workshops and ask for help. I wish we had asked for more help.
  • Practicing your presentation in advance, as well as usability testing, is key. Yes, using the actual Usability Lab at the Scholarly Commons is ideal, but at the very least take time to make sure the instructions for using what you created are accurate. It’s amazing what steps you will leave off when you have used an app more than twice. Similarly, make sure that you can run your program and another program at the same time; if you can’t, chances are you might crash someone’s browser when they use it.

Overall, if you get a chance to participate in a library hackathon, go for it; it’s a great way to do a cool project and get more experience working with data!

Learning how to present with Michael Alley’s The Craft of Scientific Presentations

Slideshows are serious business, and bad slides can kill. Many books, including the one I will review today, discuss the role that Morton Thiokol’s poorly designed and overly complicated slides about the Challenger O-rings played in why the shuttle was allowed to launch despite its flaws. PowerPoint has become the default presentation style in a wide range of fields — regardless of whether or not that is a good idea, see the 2014 Slate article “PowerPointLess” by Rebecca Schuman.  With all that being said, in order to learn a bit more about how to present, I read The Craft of Scientific Presentations by Michael Alley, an engineering communications professor at Penn State.

To start, what did Lise Meitner, Barbara McClintock, and Rosalind Franklin have in common? According to Michael Alley, their weak science communication skills meant they were not taken as seriously even though they had great ideas and did great research… Yes, the author discusses how Niels Bohr was a very weak speaker (which only somewhat had to do with English being his third language) but it’s mostly in the context of his Nobel Prize speech or trying to talk to Winston Churchill; in other words, the kinds of opportunities that many great women in science never got… Let’s just say the decontextualized history of science factoids weaken some of the author’s arguments…

This is not to say that science communication is not important, but these are some important ideas to remember:

Things presentation skills can help you with:

  • Communicating your ideas with a variety of audiences more effectively
  • Marketing your research and yourself as a researcher more effectively
  • Creating engaging presentations that people pay attention to

Things presentation skills cannot help you with:

  • Overcoming systemic inequality in academia and society at large, though speaking out about your experiences and calling out injustice when you see it can help in a very long term way
  • Not feeling nervous especially if you have an underlying anxiety disorder, though practice can potentially reduce that feeling

For any presentation: know your topic well, be very prepared, and actually practice giving your talk more than you do anything else (such as making slides). But like any skill, the key is practice, practice, practice!

For the most part, this book is a great review of the common-sense advice that’s easy to forget when you are standing in front of a large audience with everyone looking at you expectantly. The author also offers a lot of great critiques of the default presentations you can churn out with PowerPoint, and of PowerPoint itself. PowerPoint has the advantage of being the most common type of slideshow presentation software, though alternatives exist and have been discussed in depth elsewhere on the blog and in university resources. Alley introduces the Assertion-Evidence approach, in which you reach people by presenting your research as meme-like slides: images with a statement of text overlaid. Specifically, you use one-sentence summaries and replace bullet points with visualizations. You also have to keep Murphy’s Law in mind: slide colors or a standard font not being supported can throw off a presentation. Since Murphy’s Law does not disappear when you create a presentation around visuals, especially custom-made images and video, you may need more preparation time for this style of presentation.

Creating visualizations and one-sentence summaries, as well as practicing your speech to prepare for these things not working, is a great strategy for preparing for a research talk. One interesting thing to think about: if Alley admits that less-tested methods like TED (Technology-Entertainment-Design) and pecha kucha also work for effective presentations, how much of the success of his method has to do with people caring and putting time into their presentations rather than with a change in presentation style?

Overall, this book is a good review of public speaking advice specifically targeted toward a science and engineering audience, and it will hopefully get people taking more time and thinking more about their presentations.

Presentation resources on campus:

  • For science specifically, definitely check out our new science communication certificate through the 21st Century Scientists Working Group and the Center for Innovation in Teaching and Learning. They offer a variety of workshops and opportunities for students to develop their skills as science communicators. There are also science communication workshops throughout the country over the summer.
  • If you have time, join a speech or debate team (Mock Trial or parliamentary-style debate in particular); it’s the best way to learn how to speak extemporaneously, answer hostile questions on the fly, and get coaching and feedback on what you need to work on. If you’re feeling really bold, performing improv comedy can help with these skills as well.
  • If you don’t have time to be part of a debate team, or you can’t say “yes and…” to joining an improv comedy troupe, take advantage of opportunities to present at various events around campus. For example, this year’s Pecha Kucha Night is going to be June 10th at Krannert Center, and applications are due by April 30! If this is still too much, find someone, whether in your unit, the Career Center, etc., who will listen to you talk about your research. Or, if you have the motivation and don’t mind the cringe, get one of your friends to record you presenting (if you don’t want to use your phone for this, check out the loanable tech at the UGL!).

And for further reading take a look at:

http://guides.library.illinois.edu/presentation/getting_started

Hope this helps, and good luck with your research presentations!

Creating Eye-Catching and Collaborative Charts with LucidChart

A sample chart I whipped up in just a few minutes.

Sometimes the way that you display your data can be just as important as the data itself. However, for those of us who are less artistically inclined, finding a way to present our ideas in clear, appealing ways can be difficult. That is why LucidChart can be a powerful ally in your quest to present your research!

LucidChart is a free online tool (though there are paid storage packages for heavy users) that allows you to create and share various kinds of charts, with options ranging from mind maps to Cisco network diagrams, cause-and-effect diagrams to floor plans. The categories that LucidChart sorts their standard templates into are: Android, Business Analysis, Education, Engineering, Entity Relationship (ERD), Floorplan, Flowchart, Mind Map, Network, Network Infrastructure, Org Chart, Other, Site Map, UML, Venn Diagram, Wireframe, and iOS. You can also create and save personal templates. Each of the many options can be customized, and elements from other templates can be added to whatever chart you are using.

The chart template selection screen.

LucidChart takes some of the difficulty out of designing a chart. While you have the option to change every aspect of the chart, you can also use the recommended shapes, colors, and layouts that LucidChart provides for you. While every template will need at least a little tweaking (because all data is different), these options can make the process of creating your chart quicker and less stressful.

The basic work space for LucidChart.

One of the greatest aspects of LucidChart is the ability to share charts. Similar to other collaborative creation websites like Google Docs, you can send the link out to collaborators. You can then allow collaborators to edit, comment on, and/or view your document. You can also share your document on social media, or embed it on a website. A chat option makes for easy commentary on your chart as well.

Overall, LucidChart is a great data visualization tool, especially for newcomers who may need a helping hand with creating charts that adequately communicate their ideas to others!

Scholarly Smackdown: StoryMap JS vs. Story Maps

In today’s very spatial Scholarly Smackdown post, we are covering two popular mapping visualization products: Story Maps and StoryMap JS. Yes, they both have “story” and “map” in the name, and they both let you create interactive multimedia maps without needing a server. However, they are different products!

StoryMap JS

StoryMap JS, from the Knight Lab at Northwestern, is a simple tool for creating interactive maps and timelines for journalists and historians with limited technical experience.

One example of a project on StoryMap JS is “Hockey, hip-hop, and other Green Line highlights” by Andy Sturdevant for the Minneapolis Post, which connects the stops of the Green Line train to historical and cultural sites of St. Paul and Minneapolis, Minnesota.

StoryMap JS uses Google products and map software from OpenStreetMap.

Using the StoryMap JS editor, you create slides with uploaded or linked media within their template. You then search the map and select a location, and the slide will be connected to the selected point. You can embed your finished map into your website, but Google-based links can deteriorate over time, so save copies of all your files!

More advanced users will enjoy the Gigapixel mode which allows users to create exhibits around an uploaded image or a historic map.

Story Maps

Story Maps is a custom map-based exhibit tool based on ArcGIS Online.

My favorite example of a project on Story Maps is The Great New Zealand Road Trip by Andrew Douglas-Clifford, which makes me want to drop everything and go to New Zealand (and learn to drive). But honestly, I can spend all day looking at the different examples in the Story Maps Gallery.

Story Maps offers a greater number of ways to display stories than StoryMap JS, especially in the paid version. The paid version even includes a crowdsourced Story Map where you can incorporate content from respondents, such as their 2016 GIS Day Events map.

With a free non-commercial public ArcGIS Online account you can create a variety of types of maps. Although it does not appear that there is a way to overlay a historical map, there is a comparison tool that could be used to show changes over time. In the free edition of this software you have to use images hosted elsewhere, such as in Google Photos. Story Maps are created through a wizard where you add links to photos and videos, followed by information about these objects, and then search for and add the location. It is very easy to use, almost as easy as StoryMap JS. However, since this is proprietary software, there are limits to what you can do with the free account, and perhaps worries about pricing and accessing materials at a later date.

Overall, I can’t really say there’s a clear winner. If you need to tell a story with a map, both pieces of software do a fine job. StoryMap JS is, in my totally unscientific opinion, slightly easier to use, but we have workshops for Story Maps here at the Scholarly Commons! Either way, you will be fine even with limited technical or map-making experience.

If you are interested in learning more about data visualization, ArcGIS Story Maps, or geospatial data in general, check out these upcoming workshops here at the Scholarly Commons, or contact our GIS expert, James Whitacre!