Wikidata and Wikidata Human Gender Indicators (WHGI)

Wikipedia is a central player in online knowledge production and sharing. Since its founding in 2001, Wikipedia has been committed to open access and open editing, which has made it the most popular reference work on the web. Though students are still warned away from using Wikipedia as a source in their scholarship, it presents well-researched information in an accessible and ostensibly democratic way.

Most people know Wikipedia from its high ranking in most internet searches and tend to use it for its encyclopedic value. The Wikimedia Foundation—which runs Wikipedia—has several other projects which seek to provide free access to knowledge. Among those are Wikimedia Commons, which offers free photos; Wikiversity, which offers free educational materials; and Wikidata, which provides structured data to support the other wikis.

The Wikidata logo

Wikidata provides structured data to support Wikipedia and other Wikimedia Foundation projects

Wikidata is a great tool for studying how Wikipedia is structured and what information is available through the online encyclopedia. Since it is presented as structured data, it can be analyzed quantitatively more easily than Wikipedia articles can. This has led to many projects that let users explore the data through visualizations, queries, and other means. Wikidata offers a page of Tools that can be used to analyze Wikidata more quickly and efficiently, as well as Data Access instructions for how to use data from the site.
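To give a concrete sense of what "structured data" means here, the sketch below builds a query for Wikidata's public SPARQL endpoint (query.wikidata.org) asking for the gender recorded on a single item. The identifiers Q42 (Douglas Adams) and P21 ("sex or gender") are real Wikidata IDs, but the helper function names are our own, and this is only a minimal illustration of the Data Access instructions, not an official client.

```python
import json
import urllib.parse
import urllib.request

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

def build_query(item_id):
    """Build a SPARQL query for the 'sex or gender' (P21) value of one item."""
    return (
        "SELECT ?genderLabel WHERE { "
        f"wd:{item_id} wdt:P21 ?gender . "
        'SERVICE wikibase:label { bd:serviceParam wikibase:language "en". } }'
    )

def fetch_gender(item_id):
    """Run the query against the public endpoint and return the label strings."""
    url = SPARQL_ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": build_query(item_id), "format": "json"}
    )
    # Wikidata asks clients to identify themselves with a User-Agent header.
    req = urllib.request.Request(url, headers={"User-Agent": "whgi-demo/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [b["genderLabel"]["value"] for b in data["results"]["bindings"]]

# Example (requires network access):
# fetch_gender("Q42")
```

The same endpoint powers many of the visualization tools linked from Wikidata's Tools page.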


The home page for the Wikidata Human Gender Indicators project

An example of a project born out of Wikidata is the Wikidata Human Gender Indicators (WHGI) project. The project uses metadata from Wikidata entries about people to analyze trends in gender disparity over time and across cultures. The project presents the raw data for download, as well as charts and an article written about the discoveries the researchers made while compiling the data. Some of the visualizations they present are confusing (perhaps they could benefit from reading our Lightning Review of Data Visualization for Success), but they succeed in conveying important trends that reveal a bias toward articles about men, as well as an interesting phenomenon surrounding celebrities. Some regions have a higher ratio of biographies about women to biographies about men because many of their articles cover actresses and female musicians, which reflects cultural differences surrounding fame and gender.

Of course, like many data sources, Wikidata is not perfect. The creators of the WHGI project frequently discovered that articles did not have complete metadata related to gender or nationality, which greatly limited their ability to analyze the trends present on Wikipedia in those areas. Since Wikipedia and Wikidata are open to editing by anyone and are governed by practices the community has agreed upon, it is important for Wikipedians to consider including more metadata in their articles so that researchers can use that data in new and exciting ways.
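A WHGI-style ratio computation has to cope with exactly this kind of missing metadata. Here is a minimal sketch using made-up biography records (real WHGI data comes from Wikidata dumps), in which one item lacks a gender field and is counted separately rather than silently dropped:

```python
from collections import Counter

def gender_counts(records):
    """Count biographies per gender value, tracking items with no gender metadata."""
    counts = Counter()
    for rec in records:
        counts[rec.get("gender", "unknown")] += 1
    return counts

# Hypothetical biography records for illustration only.
records = [
    {"name": "A", "gender": "female"},
    {"name": "B", "gender": "male"},
    {"name": "C", "gender": "male"},
    {"name": "D"},  # no gender metadata recorded
]

counts = gender_counts(records)
known = counts["female"] + counts["male"]
print(f"female share of gendered biographies: {counts['female'] / known:.0%}")
print(f"items missing gender metadata: {counts['unknown']}")
```

Reporting the "unknown" count alongside the ratio makes the coverage gap visible instead of hiding it in the denominator.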


New Uses for Old Technology at the Arctic World Archive

In this era of rapid technological change, it is easy to fall into the mindset that the “big new thing” is always an improvement on the technology that came before it. Certainly this is often true, and here in the Scholarly Commons we are always seeking innovative new tools to help you out with your research. However, every now and then it’s nice to just slow down and take the time to appreciate the strengths and benefits of older technology that has largely fallen out of use.

A photo of the arctic

There is perhaps no better example of this than the Arctic World Archive, a facility on the Norwegian archipelago of Svalbard. Opened in 2017, the Arctic World Archive seeks to preserve the world’s most important cultural, political, and literary works in a way that will ensure that no manner of catastrophe, man-made or otherwise, could destroy them.

If this is all sounding familiar to you, that’s because you’ve probably heard of the Arctic World Archive’s older and much better-known sibling, the Svalbard Global Seed Vault. The Seed Vault is an archive of seeds from around the world, meant to ensure that humanity could continue growing crops and producing food in the event of a catastrophe that wipes out plant life.

Indeed, the two archives have a lot in common. The World Archive is housed deep within a mountain in an abandoned coal mine that once served as the location of the seed vault, and was founded to be for cultural heritage what the seed vault is for crops. But the Arctic World Archive has made such innovative use of old technology that it is an impressive site in its own right.

A photo of the arctic

Perhaps the coolest (pun intended) aspect of the Arctic World Archive is the fact that it does not require electricity to operate. Its extreme northern location (it is near the northernmost town of at least 1,000 people in the world) means that the temperature inside the facility is naturally very cold year-round. As any archivist or rare book librarian who brings a fleece jacket to work in the summer will happily tell you, colder temperatures are ideal for preserving documents, and the ability to store items in a very cold climate without the use of electricity makes the World Archive perfect for sustainable, long-term storage.

But that’s not all: in a real blast from the past, all information stored in this facility is kept on microfilm. Now, I know what you’re thinking: “it’s the 21st century, grandpa! No one uses microfilm anymore!”

It’s true that microfilm is used by a very small minority of people nowadays, but nevertheless it offers distinct advantages that newer digital media just can’t compete with. For example, microfilm is rated to last for at least 500 years without corruption, whereas digital files may not last anywhere near that long. Beyond that, the film format means that the archive is totally independent from the internet, and will outlast any major catastrophe that disrupts part or all of our society’s web capabilities.

A photo of a seal

The Archive is still growing, but it is already home to film versions of Edvard Munch’s The Scream, Dante’s The Divine Comedy, and an assortment of government documents from many countries including Norway, Brazil, and the United States.

As it continues to grow, its importance as a place of safekeeping for the world’s cultural heritage will hopefully serve as a reminder that sometimes, older technology has upsides that newer tech simply can’t match.

Exploring Data Visualization #9

In this monthly series, I share a combination of cool data visualizations, useful tools and resources, and other visualization miscellany. The field of data visualization is full of experts who publish insights in books and on blogs, and I’ll be using this series to introduce you to a few of them. You can find previous posts by looking at the Exploring Data Visualization tag.

Map of election districts colored red or blue based on predicted 2018 midterm election outcome

This map breaks down likely outcomes of the 2018 Midterm elections by district.


Seniors at Montgomery Blair High School in Silver Spring, Maryland created the ORACLE of Blair 2018 House Election Forecast, a website that hosts visualizations that predict outcomes for the 2018 Midterm Elections. In addition to breakdowns of voting outcomes by state and district, the students compiled descriptions of how each district has voted historically and which stances matter most for the current candidates. How well do these predictions match up with Tuesday’s results?

A chart showing price changes for 15 items from 1998 to 2018

This chart shows price changes over the last 20 years. It gives the impression that these price changes are always steady, but that isn’t the case for all products.

Lisa Rost at Datawrapper created a chart—building on the work of Olivier Ballou—that shows the change in the price of goods using the Consumer Price Index. She provides detailed coverage of how her chart is put together, and makes clear what is missing from both her chart and Ballou’s depending on which products are chosen for the graph. This behind-the-scenes information provides useful advice for how to read and design charts that are clear and informative.

An image showing a scale of scientific visualizations from figurative on the left to abstract on the right.

There are a lot of ways to make scientific research accessible through data visualization.

Visualization isn’t just charts and graphs—it’s all manner of visual objects that contribute information to a piece. Jen Christiansen, the Senior Graphics Editor at Scientific American, knows this well, and her blog post “Visualizing Science: Illustration and Beyond” on Scientific American covers some key elements of what it takes to make engaging and clear scientific graphics and visualizations. She shares lessons learned at all levels of creating visualizations, as well as covering a few ways to visualize uncertainty and the unknown.

I hope you enjoyed this data visualization news! If you have any data visualization questions, please feel free to email the Scholarly Commons.

Election Forecasts and the Importance of Good Data Visualization

In the wake of the 2016 presidential election, many people, on the left and right alike, came together on the internet to express a united sentiment: that the media had called the election wrong. In particular, one man may have received the brunt of this negative attention. Nate Silver and his website FiveThirtyEight have taken nearly endless flak from disgruntled Twitter users over the past two years for their forecast which gave Hillary Clinton a 71.4% chance of winning.

However, as Nate Silver has argued in many articles and tweets, he did not call the race “wrong” at all; everyone else simply misinterpreted his forecast. So what really happened? How could Nate Silver say that he wasn’t wrong when so many believe to this day that he was? As believers in good data visualization practice, we here in the Scholarly Commons can tell you that if everyone interprets your data to mean one thing when you really meant it to convey something else entirely, your visualization may be the problem.

Today is Election Day, and once again, FiveThirtyEight has new models out forecasting the various House, Senate, and Governors races on the ballot. However, these models look quite a bit different from 2016’s, and in those differences lie some important data viz lessons. Let’s dive in and see what we can see!

The image above is a screenshot taken from the very top of the page for FiveThirtyEight’s 2016 Presidential Election Forecast, which was last updated on the morning of Election Day 2016. The image shows a bar across the top, filled in blue 71.4% of the way, to represent Clinton’s chance of winning, and red the rest of the 28.6% to represent Trump’s chance of winning. Below this bar is a map of the fifty states, colored from dark red to light red to light blue to dark blue, representative of the percentage chance that each state goes for one of the two candidates.

The model also allows you to get a sense of where exactly each state stands by hovering your cursor over a particular state. In the above example, we can see a bar similar to the one at the top of the national forecast, which shows Clinton’s 55.1% chance of winning Florida.

The top line of FiveThirtyEight’s 2018 predictions looks quite a bit different. When you open the House or Senate forecasts, the first thing you see is a bell curve, not a map, as exemplified by the image of the House forecast below.

At first glance, this image may be more difficult to take in than a simple map, but it actually contains a lot of information that is essential to anyone hoping to get a sense of where the election stands. First, the top-line likelihood of each party taking control is expressed as a fraction, rather than as a percent. The reasoning behind this is that some feel that the percent bar from the 2016 model improperly gave the sense that Clinton’s win was a sure thing. The editors at FiveThirtyEight hope that fractions will do a better job than percentages at conveying that the forecasted outcome is not a sure thing.

Beyond this, the bell curve shows forecasted percentage chances for every possible outcome (for example, at the time of writing, there is a 2.8% chance that Democrats gain 37 seats, a 1.6% chance that Democrats gain 20 seats, a <0.1% chance that Democrats gain 97 seats, and a <0.1% chance that Republicans gain 12 seats). This visualization shows the inner workings of how the model makes its prediction. Importantly, it drives home the idea that any result could happen, even if one end result is considered more likely. What’s more, the model features a gray rectangle centered on the average result that highlights the middle 80% of the forecast: there is an 80% chance that the result will be between a Democratic gain of 20 seats (meaning Republicans would hold the House) and a Democratic gain of 54 (a so-called “blue wave”).
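That gray 80% band is simply the 10th-to-90th percentile range of the model’s simulated outcomes. A rough sketch of that computation, using made-up simulated seat gains (FiveThirtyEight’s actual simulation data is not published in this form, and this index-based percentile is a simplification that does not interpolate between values):

```python
import random

def middle_80(outcomes):
    """Return the (10th, 90th) percentile bounds of a list of simulated outcomes."""
    s = sorted(outcomes)
    lo = s[int(0.10 * (len(s) - 1))]
    hi = s[int(0.90 * (len(s) - 1))]
    return lo, hi

# Hypothetical Democratic seat gains from 1,000 simulated elections,
# centered on an illustrative average gain of 37 seats.
random.seed(0)
sims = [round(random.gauss(37, 13)) for _ in range(1000)]
low, high = middle_80(sims)
print(f"80% of simulations fall between {low} and {high} seats")
```

Showing the full distribution plus this band tells readers how spread out the plausible outcomes are, which a single top-line percentage cannot.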

The 2018 models do feature maps as well, such as the above map for the Governors forecast. But some distinct changes have been made. First, you have to scroll down to get to the map, hopefully absorbing some important information from the graphs at the top in the meantime. Most prominently, FiveThirtyEight has re-thought the color palette they are using. Whereas the 2016 forecast only featured shades of red and blue, this year the models use gray (House) and white (Senate and Governors) to represent toss-ups and races that only slightly lean one way or the other. If this color scheme had been used in 2016, North Carolina and Florida, both states that ended up going for Trump but were colored blue on the map, would have been much more accurately depicted not as “blue states” but as toss-ups.

Once again, hovering over a state or district gives you a detail of the forecast for that place in particular, but FiveThirtyEight has improved that as well.

Here we can see much more information than was provided in the hover-over function for the 2016 map. Perhaps most importantly, this screen shows us the forecasted vote share for each candidate, including the average, high, and low ends of the prediction. So for example, from the above screenshot for Illinois’ 13th Congressional District (home to the University of Illinois!) we can see that Rodney Davis is projected to win, but there is a very real scenario in which Betsy Dirksen Londrigan ends up beating him.

FiveThirtyEight did not significantly change how their models make predictions between 2016 and this year. The data itself is treated in roughly the same way. But as we can see from these comparisons, the way that this data is presented can make a big difference in terms of how we interpret it. 

Will these efforts at better data visualization be enough to deter angry reactions to how the model correlates with actual election results? We’ll just have to tune in to the replies on Nate Silver’s Twitter account tomorrow morning to find out… In the meantime, check out their House, Senate, and Governors forecasts for yourself!


All screenshots taken from fivethirtyeight.com. Images of the 2016 models reflect the “Polls-only” forecast. Images of the 2018 models reflect the “Classic” forecasts as of the end of the day on November 5th 2018.