There’s been a Murder in SQL City!

by Libby Cave
Detective faces board with files, a map and pictures connected with red string.

If you are interested in data or relational databases, then you have heard of SQL. SQL, or Structured Query Language, is designed to handle structured data and supports querying data, manipulating it, defining its structure, and controlling access to it. It is a user-friendly language to learn, with a simple code structure and minimal use of special characters. Because of this, SQL is the industry standard for database management, and that standing is reflected in the job market, where there is strong demand for employees with SQL skills.

Enter SQL Murder Mystery

In an effort to promote the learning of this valuable language, Knight Lab, a specialized subsidiary of Northwestern University, created SQL Murder Mystery. Combining the known benefits of gamification with the popularity of whodunit detective work, SQL Murder Mystery aims to help SQL beginners become familiar with the language and have some fun with a normally dry subject. Players take on the role of a gumshoe detective tasked with solving a murder. The problem is that you have misplaced the crime scene report, so you must dive into the police department's database to find the clues. For true beginners with no experience, the website provides a walkthrough to help get players started. More experienced learners can jump right in and practice their skills.

I’m on the case!

I have no experience with SQL, but I am interested in database design and information retrieval, so I knew it was high time I learned the basics. As a fan of both games and detective stories, SQL Murder Mystery seemed like a great place to start. Since I am a true beginner, I started with the walkthrough. As promised on the website, the walkthrough does not give a complete, exhaustive introduction to SQL as a language, but instead provides the tools needed to get started on the case. SQL as a language, relational databases, and Entity Relationship Diagrams (ERDs) are briefly explained in an approachable manner, and I was introduced to vital SQL keywords like SELECT, WHERE, wildcards, and BETWEEN. My one issue with the game was the section on joining tables. I learned later that I was running into trouble because the tables each had columns with the same name, which is common in relational databases: to resolve the ambiguity, you must qualify each column with the table it belongs to (sketched in the example below). The guide did not explain that this could be an issue, and I had to do some digging on my own to find the fix; the walkthrough should have anticipated the problem and mentioned it. That aside, by the end of the walkthrough I could join tables, search for partial matches, and search within ranges. With some common sense, the database's ERD, and my new SQL skills, I was able to solve the crime! For users who want more of a challenge, there is an additional task: find the accomplice using no more than two queries.
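
For anyone who hits the same snag, here is a minimal sketch of the fix, written in Python with its built-in sqlite3 module. The table and column names are invented for illustration and are not the game's actual schema:

    import sqlite3

    # Build a toy database with two tables that share a column name ("id"),
    # the situation that tripped me up in the game
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE person (id INTEGER, name TEXT)")
    cur.execute("CREATE TABLE interview (id INTEGER, transcript TEXT)")
    cur.execute("INSERT INTO person VALUES (1, 'Jane Doe')")
    cur.execute("INSERT INTO interview VALUES (1, 'I saw nothing!')")

    # Prefixing each column with its table name resolves the ambiguity
    cur.execute("""
        SELECT person.name, interview.transcript
        FROM person
        JOIN interview ON person.id = interview.id
        WHERE person.name LIKE 'Jane%'
    """)
    print(cur.fetchall())  # -> [('Jane Doe', 'I saw nothing!')]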

Example of the SQL Murder Mystery user interface

The Verdict is In

I really loved this game! It served as a great introduction to a language I had never used before while still managing to be really engaging. It reminded me of escape-room mystery boxes like Hunt a Killer, which have users solve small puzzles on the way to a larger final solution. Anyone who loves logic puzzles or mysteries will enjoy this game, even with no experience with, or interest in, coding or databases. If you have some free time and a desire to explore a new skill, you should absolutely give SQL Murder Mystery a try!

A Different Kind of Data Cleaning: Making Your Data Visualizations Accessible

Introduction: Why Does Accessibility Matter?

Data visualizations are a fast and effective way to communicate information, and they are becoming an increasingly popular way for researchers to share their data with a broad audience. With that rising importance comes the responsibility to ensure that data visualizations are accessible to everyone. Accessible data visualizations not only serve audiences who rely on a screen reader or other assistive tool to read a document; they also benefit their creators by bringing the data to a much wider audience than a non-accessible visualization could reach. This post offers three tips on how you can make your visualizations accessible!

TIP #1: Color Selection

One of the most important choices when making a data visualization is the set of colors used in the chart. A good practice is to run the colors in your visualization through a color blindness simulator and experiment until you find sufficient contrast between them. Look at the example regarding the top ice cream flavors:

A data visualization about the top flavors of ice cream. Chocolate was the top flavor (40%) followed by Vanilla (30%), Strawberry (20%), and Other (10%).

At first glance, these colors may seem acceptable for this kind of data. But when run through the color blindness simulator, one of the results reveals an accessibility concern:

This is the same pie chart above, but placed under a tritanopia color blindness lens. The colors used for strawberry and vanilla now look the exact same and blend into one another because of this, making it harder to discern the amount of space they take in the pie chart.

Although the colors contrast well enough in the normal view, the colors used for the strawberry and vanilla categories look the same to viewers with tritanopia color blindness. As a result, these sections blend into one another, making it more difficult to distinguish their values. Most color palettes built into current data visualization software are already designed to contrast well, but it is still good practice to check that the colors do not blend into one another!
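
If you build charts in code, most plotting libraries let you set the palette explicitly. Here is a minimal matplotlib sketch in Python (a reconstruction of the example, not the original chart) that draws on the Okabe-Ito palette, a widely used colorblind-safe color set; even then, it is worth running the output through a simulator:

    import matplotlib.pyplot as plt

    # Flavor data matching the example chart
    flavors = ["Chocolate", "Vanilla", "Strawberry", "Other"]
    shares = [40, 30, 20, 10]

    # Four colors from the Okabe-Ito colorblind-safe palette
    colors = ["#E69F00", "#56B4E9", "#009E73", "#CC79A7"]

    plt.pie(shares, labels=flavors, colors=colors, autopct="%d%%")
    plt.title("Top Ice Cream Flavors")
    plt.savefig("flavors.png")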

TIP #2: Adding Alt Text

Since data visualizations often appear as images in published work or reports, alt text is crucial for accessibility. Take the visualization below: without alt text, it would be meaningless to readers who rely on alt text to read a document. Alt text should be short and summarize the key takeaways from the data; there is no need to describe each individual point, but it should provide enough information to convey the trends in the data.

This is a chart showing the population size of each town in a given county. Towns are labeled A-E and continue to grow in population size as they go down the alphabet (town A has 1,000 people while town E has 100,000 people).
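
As a quick sketch of what that looks like in practice, here is the kind of concise, trend-focused alt text you might attach when embedding a chart image in a web page; the file name and figures are illustrative, and the snippet simply builds the HTML from Python:

    # Good alt text summarizes the trend rather than listing every data point
    alt_text = (
        "Chart of town populations in a county. Towns A through E grow "
        "steadily in size, from 1,000 people in town A to 100,000 in town E."
    )

    # Embed the saved chart image in a page with the alt attribute set
    html = f'<img src="town_populations.png" alt="{alt_text}">'
    print(html)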

TIP #3: Clearly Labeling Your Data

A simple but crucial component of any visualization is having clear labels on your data. Let’s look at two examples to see what makes having labels a vital aspect of any data visualization:

This is a chart of how much money was earned and spent at a lemonade stand by month. There are no y-axis labels to indicate how much money was earned or spent, and no key to distinguish the two lines representing money made and money spent.

Nothing in this graph provides any useful information about the money earned or spent at the lemonade stand. How much money was earned or spent each month? What do the two lines represent? Now look at a more clearly labeled version of the same data:

This is a cleaned version of the previous visualization of money earned and spent at a lemonade stand. The addition of a y-axis and key now shows that more money was spent than earned in January and February; the trend then reverses in March, with earnings peaking in July and falling until December, when more money is spent than earned again.

By adding a labeled y-axis, we can quantify the gap between the two lines at any point and get a better idea of the money earned or spent in any given month. Furthermore, the key at the bottom of the visualization distinguishes the lines, telling the audience what each represents. With the data clearly labeled, audience members can finally interpret and analyze it properly.
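
For chart authors working in code, labels and keys are usually one line each. Here is a hedged matplotlib sketch, with made-up numbers standing in for the lemonade-stand data:

    import matplotlib.pyplot as plt

    months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
    earned = [40, 45, 70, 95, 120, 150, 170, 150, 115, 85, 60, 40]  # illustrative
    spent = [60, 60, 55, 60, 65, 70, 75, 70, 65, 60, 55, 55]        # illustrative

    # The label arguments feed the key (legend) that distinguishes the lines
    plt.plot(months, earned, label="Money earned")
    plt.plot(months, spent, label="Money spent")

    plt.xlabel("Month")
    plt.ylabel("Dollars")  # a labeled y-axis lets readers quantify the gap
    plt.title("Lemonade Stand: Earned vs. Spent")
    plt.legend(loc="lower center")
    plt.savefig("lemonade.png")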

Conclusion: Can My Data Still be Visually Appealing?

While it may appear that some of these recommendations detract from the creative design of data visualizations, this is not the case at all. Designing a visually appealing visualization is another crucial aspect of the work and should be weighed carefully; accessibility concerns, however, should take priority over visual appeal. That said, accessibility in many respects encourages creativity, because it pushes the creator to think carefully about presenting their data in a way that is both accessible and attractive. Accessibility thus makes for a more creative and communicative data visualization, and it benefits everyone!

Open Education Week 2022

Open Education Week

Open Education Week brings awareness to the Open Educational Resources (OER) movement and to how OER transforms teaching and learning for instructors and students alike.

What is OER?

OER refers to open access, openly licensed instructional materials that are used for teaching, learning or research.

Why is OER Important?

OER provides free resources to institutions, teachers, and students. When incorporated into the classroom, OER can:

  • Lower the cost of education for students
  • Reinforce open pedagogy
    • Allow educators to update and adapt materials to fit their needs
    • Encourage students’ interaction with, and creation of, educational materials
  • Encourage open knowledge dissemination

OER Incentive Grant

The University is giving faculty an incentive to adopt, adapt, or create OER for their courses instead of using expensive materials. The OER Incentive Grants will fund faculty teaching undergraduate courses. Instructors can submit applications in three tiers:

  • Tier 1: Adopt – incorporate an existing open textbook into a course
  • Tier 2: Adapt – incorporate portions of multiple existing open textbooks, along with other freely available educational resources, modifications of existing open education materials/textbooks, or development of new open education materials
  • Tier 3: Create – write new openly licensed textbooks

The preferred deadline to submit a proposal is March 11th. If you are interested in submitting a grant proposal but cannot make this deadline, please reach out to Sara Benson at srbenson@illinois.edu. To learn more about this program, see the webpage on the Faculty OER Incentive Program.

Upcoming OER Publication

In conjunction with Sara Benson, copyright librarian at UIUC, and the Illinois Open Publishing Network (IOPN), co-authors Christy Bazan, Brandi Barnes, Ryan Santens, and Emily Verone will publish an OER textbook titled Drug Use and Misuse: A Community Health Perspective. The book explores drug use and misuse through the lens of community health and examines their impact on communities. Drug Use and Misuse is the third publication in IOPN’s Windsor & Downs Press OPN Textbook series. See the video below to learn more about the process of creating this textbook.

It Matters How We Open Knowledge: Open Access Week 2021

It’s that time of year again! Open Access Week is October 25-31, and the University of Illinois Library is excited to participate. Open Access Week is an international event where the academic and research community come together to learn about Open Access and to share that knowledge with others. The theme guiding this year’s discussion of open access will be “It Matters How We Open Knowledge: Building Structural Equity.”

These discussions will build on last year’s theme of “Open with Purpose: Taking Action to Build Structural Equity and Inclusion.” While last year’s theme was intended to get people thinking about the ways our current information systems marginalize and exclude, this year’s theme is focused on information equity as it relates to governance.

OA Week digital banner with theme name and date

Specifically, this year’s theme intentionally aligns with the recently released United Nations Educational, Scientific and Cultural Organization (UNESCO) recommendation on Open Science, which encompasses practices such as publishing open research, campaigning for open access, and generally making it easier to publish and communicate scientific knowledge.

Circulated in draft form following discussion by representatives of UNESCO’s 193 member countries, the recommendation powerfully articulates and centers the importance of equity in pursuing a future for scholarship that is open by default. As the first global standard-setting framework on Open Science, the UNESCO Recommendation will provide an important guide for governments around the world as they move from aspiration to the implementation of open research practices.

UNESCO Icon

While the University of Illinois is not hosting any formal events for Open Access Week, the Library encourages students, staff, and faculty to familiarize themselves with existing open access resources, including:

  • IDEALS: The Illinois Digital Environment for Access to Learning and Scholarship, collects, disseminates, and provides persistent and reliable access to the research and scholarship of faculty, staff, and students at Illinois. Once an article is deposited in IDEALS, it may be efficiently and effectively accessed by researchers around the world, free of charge.
  • Copyright: Scholarly Communication and Publishing offers workshops and consultation services on issues related to copyright. While the Library cannot offer legal advice, we can help you to identify information and issues you may want to consider in addressing your copyright question.
  • Illinois Open Publishing Network: The Illinois Open Publishing Network (IOPN) is a set of digital publishing initiatives hosted and coordinated at the University of Illinois at Urbana-Champaign Library. IOPN offers a suite of publishing services to members of the University of Illinois at Urbana-Champaign community and aims to facilitate the dissemination of high-quality, open access scholarly publications. IOPN services include infrastructure and support for publishing open access journals, monographs, and born-digital projects that integrate multimedia and interactive content.

IOPN logo

For more information on how to support open access at the University of Illinois, please reach out to the Scholarly Commons or the Scholarly Communication and Publishing unit. For more information about International Open Access Week, please visit www.openaccessweek.org. Get the latest updates on Open Access events on Twitter using the hashtag #OAWeek.

Making Your Work Accessible Online

A person uses a braille reader

Unsplash @Sigmund

What is Web Accessibility?

Web Accessibility is the ability for individuals with vision, hearing, cognitive, and mobility disabilities to access web content online via their preferred methods.

WCAG defines web content as:

  • Natural information such as text, images, and sounds
  • Code or markup that defines structure, presentation, etc.

The essential components of web accessibility include:

  • Content
  • Web browsers
  • Assistive Technology
  • Users’ Experience
  • Developers
  • Authoring Tools
  • Evaluation Tools

Why It Matters

Individuals with disabilities not only use the web but also contribute to its functions. Website accessibility focuses on the needs of people with disabilities. However, by considering how to make information more available, interactive, and easy to use, we also make content more accessible for everyone.

A website that uses best practices for accessibility provides equitable access and opportunities to all its users, creates a great user experience, increases website interaction (multi-modal interaction), and enhances the overall usability of the site.

Introducing Web Content Accessibility Guidelines (WCAG)

The WCAG grew out of the World Wide Web Consortium’s (W3C) mission of developing international standards for the continued development of the web, and out of the W3C Web Accessibility Initiative’s (WAI) mission to gather people from varying organizations to create guidelines and resources for people with disabilities.

The WCAG provides “a single shared standard for web content accessibility that meets the needs of individuals, organizations, and governments” worldwide.

The WCAG has four accessibility principles, which form the acronym POUR:

  • Principle 1: Perceivable
    • The information and methods of interacting with hardware and software must be presented in ways that users can perceive. Examples include having text alternatives and using captioning in videos.
  • Principle 2: Operable
    • The hardware and software elements and navigation must be practical for users. Examples include ensuring keyboard accessibility and allowing users enough time to read and understand content.
  • Principle 3: Understandable
    • The information and the operation of hardware and software must be readable and understandable for users. Examples include ensuring that the text is easy to read and retaining the same style of program selections on different pages.
  • Principle 4: Robust
    • The content must be compatible enough to be interpreted by a wide variety of software used to access the web, including assistive technologies such as screen readers. Examples include parsing, that is, ensuring that HTML elements have proper start and end tags.

Tip: Validate the accessibility of your website using these tools: Web Accessibility Evaluation Tools List
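
To give a feel for what such evaluation tools automate, here is a small, hedged Python sketch (using the third-party requests and beautifulsoup4 packages, and a placeholder URL) that flags one common WCAG problem, images without alt text:

    import requests
    from bs4 import BeautifulSoup

    # Placeholder URL; substitute a page you maintain
    page = requests.get("https://example.com")
    soup = BeautifulSoup(page.text, "html.parser")

    # Report every <img> element with a missing or empty alt attribute
    for img in soup.find_all("img"):
        if not img.get("alt"):
            print("Missing alt text:", img.get("src"))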

What has the University of Illinois Done to Meet these Standards?

University of Illinois web developers adhere to these web accessibility standards:

  • The Illinois Information Technology Accessibility Act (IITAA)
  • Section 508 of the Rehabilitation Act, as amended in 1998
  • The Web Content Accessibility Guidelines (WCAG)

The Main Library provides technological assistance via:

  • Hardware
    • Large Screen Monitors and Adjustable Tables
    • Clearview+ Magnification System
    • Braille Display
    • Tranquility Kits
  • Software
    • JAWS (Job Access With Speech)
    • Kurzweil 3000
    • ZoomText Magnifier/Reader
    • OpenBook
    • Dolphin EasyReader
    • OpenDyslexic

Please see Accessibility and Assistive Technology LibGuide for more information.

If you are interested in learning more about web accessibility and the WCAG, visit the WCAG website: https://www.w3.org/WAI/standards-guidelines/wcag/

What Are the Digital Humanities?

Introduction

As new technology has revolutionized the way every field gathers information, scholars have integrated digital software into traditional models of research. While digital software may seem relevant only to scientific research, digital projects play a crucial role in disciplines not traditionally associated with computer science. One of the biggest digital initiatives actually takes place in fields such as English, History, and Philosophy, in what is known as the digital humanities. The digital humanities are an innovative way to incorporate digital data and computer science within humanities-based research. Although some aspects of the digital humanities are exclusive to specific fields, most digital humanities projects are interdisciplinary in nature. Below are three general ways that digital humanities projects have enhanced humanities research.

Digital Access to Resources

Digital access means taking the items necessary for humanities research and creating a system where users can easily reach those resources. This work involves digitizing physical items and formatting them for storage in a database that permits access to its contents. Since some of these databases may hold thousands or even millions of items, digital humanists also work to make specific items quick and easy to locate. Thus, digital access requires both digitizing physical items and storing them in a database, and creating a path for scholars to find them for research purposes.

Providing Tools to Enhance Interpretation of Data and Sources

The digital humanities also change how we interpret sources and other research materials. Data visualization software, for example, helps simplify large, complex datasets and presents the data in more visually appealing ways. Likewise, text mining software uncovers trends in text, potentially saving digital humanists hours or even days compared to analyzing the text through analog methods. Finally, Geographic Information Systems (GIS) software allows users working on humanities projects to create specialized maps that assist in both visualizing and analyzing data. These software programs, and more, have dramatically transformed the ways digital humanists interpret and visualize their research.
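
As a toy illustration of the text mining idea, here is a short Python sketch that counts the most frequent words in a digitized text; the file name is hypothetical, and real text mining tools layer far more sophisticated analysis on top of this kind of counting:

    import re
    from collections import Counter

    # "novel.txt" is a stand-in for any digitized document
    with open("novel.txt", encoding="utf-8") as f:
        text = f.read()

    # Tokenize into lowercase words and report the ten most common
    words = re.findall(r"[a-z']+", text.lower())
    for word, count in Counter(words).most_common(10):
        print(f"{word}: {count}")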

Digital Publishing

The digital humanities have opened new opportunities for scholars to publish their work. In some cases, digital publishing simply means digitizing an article or item in print to extend the reach of a publication to readers who lack direct access to the physical version. In other cases, digital publishing initiatives publish research that is accessible only in digital format. One benefit of digital publishing is that it gives scholars more opportunities to publish their research and reaches a larger audience than print alone. As a result, the digital humanities provide scholars more publishing opportunities while also expanding the reach of their publications.

How Can I Learn More About the Digital Humanities?

There are many ways to get involved, both at the University of Illinois and around the globe. Here are a few examples that can help you get started on your own digital humanities project:

  • HathiTrust is a partnership of academic and research institutions, launched in part by the Big Ten Academic Alliance, that holds over 17 million digitized items in its collection.
  • Internet Archive is a public, multimedia database that allows for open access to a wide range of materials.
  • The Scholarly Commons page on the digital humanities offers many of the tools used for data visualization, text mining, GIS software, and other resources that enhance analysis within a humanities project. There are also a couple of upcoming Savvy Researcher workshops that will go over how to use software used in the digital humanities.
  • SourceLab is an initiative through the History Department that works to publish and preserve digital history projects. Many other humanities fields have equivalents to SourceLab that serve the specific needs of a given discipline.

Big Ten Academic Alliance Open Access Developments

Last month, the Big Ten Academic Alliance (BTAA) made a series of announcements regarding its support of Open Access (OA) initiatives across its member libraries. Open Access is the free, immediate, online availability of research articles coupled with the rights to use those articles fully in the digital environment. Put plainly, Open Access ensures that anyone, anywhere, can access and use information. By supporting these developments in OA, the BTAA aims to make information more accessible to the university community, to benefit scholars by eliminating paywalls to research, and to help researchers to publish their own work.

Big Ten Academic Alliance logo

On July 19, the BTAA announced the finalization of a three-year collective agreement with the Open Library of Humanities (OLH), a charitable organization dedicated to publishing open access scholarship with no author-facing article processing charges. OLH publishes academic journals from across the humanities disciplines, as well as hosting its own multidisciplinary journal. This move was made possible thanks to the OLH Open Consortial Offer, an initiative that offers consortia, societies, networks and scholarly projects the opportunity to join the Open Library of Humanities Library Partnership Subsidy system as a bloc, enabling each institution to benefit from a discount. Through this agreement, the BTAA hopes to expand scholarly publishing opportunities available to its member libraries, including the University of Illinois.

Following the finalization of the OLH agreement, the BTAA announced on July 21 the finalization of a three-year collective action agreement with MIT Press that provides Direct to Open (D2O) access for all fifteen BTAA member libraries. Developed over two years with the support of the Arcadia Fund, D2O gives institutions the opportunity to harness collective action to support access to knowledge. As participating libraries, the Big Ten members will help open access to all new MIT Press scholarly monographs and edited collections from 2022. As a BTAA member, the University of Illinois will support the shifting publication of new MIT Press titles to open access. The agreement also gives the University of Illinois community access to MIT Press eBook backfiles that were not previously published open access.

By entering into these agreements, the BTAA aims to promote open access publishing across its member libraries. On how these initiatives will impact the University of Illinois scholarly community, Dan Tracy, Head of Scholarly Communication and Publishing, said:

“The Library’s support of OLH and MIT Press is a crucial investment in open access publishing infrastructure. The expansion of open access publishing is a great opportunity to increase the reach and impact of faculty research, but common models of funding open access through article processing charges makes it challenging for authors in the humanities and social sciences particularly to publish open access. The work of OLH to publish open access journals, and MIT Press to publish open access books, without any author fees while also providing high quality, peer reviewed scholarly publishing opportunities provides greater equity across disciplines.”

Since these announcements, the BTAA has continued to support open access initiatives among its member libraries. Most recently, the BTAA and the University of Michigan Press signed a three-year agreement on August 5 that provides multi-year support for the University of Michigan Press’ new open access model Fund to Mission. Based on principles of equity, justice, inclusion, and accessibility, Fund to Mission aims to transition upwards of 75% of the press’ monograph publications into open access resources by the end of 2023. This initiative demonstrates a move toward a more open, sustainable infrastructure for the humanities and social sciences, and is one of several programs that university presses are developing to expand the reach of their specialist publications. As part of this agreement, select BTAA members, University of Illinois included, will have greater access to significant portions of the University of Michigan’s backlist content.

The full release and more information about recent BTAA announcements can be found on the BTAA website. To learn more about Open Access efforts at the University of Illinois, visit our OA Guide.

Comparison: Human vs. Computer Transcription of an “It Takes a Campus” Episode

Providing transcripts of audio or video content is critical for making these experiences accessible to a wide variety of audiences, especially those who are deaf or hard of hearing. Even those with perfect hearing might sometimes prefer to skim a transcript rather than listen to audio. However, the slowest part of the audio and video publishing process is often the transcription step. That was certainly true of my recent interview with Ted Underwood, which I conducted on March 2 but did not release until March 31. The majority of that time was spent transcribing the interview; editing and quality control were significantly less time consuming.

Theoretically, one way we could speed up this process is to have computers do it for us. Over the years I’ve had many people ask me whether automatic speech-to-text transcription is a viable alternative to human transcription in dealing with oral history or podcast transcription. The short answer to that question is: “sort of, but not really.”

Speech-to-text, or speech recognition, technology has come a long way, particularly in recent years. Its performance has improved to the point where human users can give auditory commands to a virtual assistant such as Alexa, Siri, or Google Home, and the device usually gives an appropriate response. However, recognizing a simple command like “Remind me at 5 pm to transcribe the podcast” is not quite the same as correctly recognizing and transcribing a 30-minute interview, which involves two different speakers and lengthy blocks of text.

To see how well the best speech recognition tools perform today, I decided to have one of them transcribe the Ted Underwood podcast interview and compare the result to the transcript I produced by hand. The tool I selected was Amazon Transcribe, part of the Amazon Web Services (AWS) suite. This service is considered one of the best options available and uses cloud computing to convert audio data to text, presumably much like Amazon’s Alexa.

It’s important to note that Amazon Transcribe is not free; however, at $0.0004 per second of audio, Ted Underwood’s interview cost me only 85 cents to transcribe. For more on Amazon Transcribe’s pricing, see this page.
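
For the curious, the service is driven through the AWS API rather than a simple upload form. Here is a rough sketch of the calls involved using the boto3 Python library, assuming the audio has already been uploaded to an S3 bucket; the bucket and job names here are made up:

    import time
    import boto3

    transcribe = boto3.client("transcribe")

    # Kick off an asynchronous transcription job on audio stored in S3
    transcribe.start_transcription_job(
        TranscriptionJobName="underwood-interview",  # made-up job name
        Media={"MediaFileUri": "s3://my-podcast-bucket/underwood.mp3"},
        MediaFormat="mp3",
        LanguageCode="en-US",
    )

    # Poll until the job finishes, then print where the JSON transcript lives
    while True:
        job = transcribe.get_transcription_job(
            TranscriptionJobName="underwood-interview"
        )["TranscriptionJob"]
        if job["TranscriptionJobStatus"] in ("COMPLETED", "FAILED"):
            break
        time.sleep(10)

    print(job["Transcript"]["TranscriptFileUri"])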

In any case, here is a comparison between my manual transcript vs. Amazon Transcribe. To begin, here is the intro to the podcast as spoken and later transcribed by me:

Ben Ostermeier: Hello and welcome back to another episode of “It Takes
a Campus.” My name is Ben, and I am currently a graduate assistant at
the Scholarly Commons, and today I am joined with Dr. Ted Underwood,
who is a professor at the iSchool here at the University of Illinois.
Dr. Underwood, welcome to the podcast and thank you for taking time
to talk to me today.

And here is Amazon Transcribe’s interpretation of that same section of audio, with changes highlighted:

Hello and welcome back to another episode of it takes a campus. My
name is Ben, and I am currently a graduate assistant at Scali Commons.
And today I'm joined with Dr Ted Underwood, who is a professor 
at the high school here at the University of Illinois. 
Dr. Underwood, welcome to the podcast. Thank you for taking 
time to talk to me today.

As you can see, Amazon Transcribe did a pretty good job, but there are some mistakes and departures from the transcript I wrote by hand. It particularly had trouble with proper nouns like “Scholarly Commons” and “iSchool,” along with minor issues like not putting a period after “Dr” and dropping the “and” conjunction in the last sentence.

Screenshot of text comparison between the Amazon-generated (left) and human-generated (right) transcripts of the podcast episode.

You can see the complete changes between the two transcripts at this link.

Please note that the raw text I received from Amazon Transcribe was not separated into paragraphs initially. I had to do that myself in order to make the comparison easier to see.

In general, Amazon Transcribe does a pretty good job of recognizing speech but makes a decent number of mistakes that require cleanup afterwards. I actually find it faster and less frustrating to transcribe by hand than to correct a ‘dirty’ transcript, but others may prefer the alternative. And in some cases, such as an institution with a very large backlog of untranscribed oral histories, the choice is between a dirty transcript and no transcript at all, in which case the dirty transcript is naturally preferable.

Also, while I did not have time to do this, there are ways to train Amazon Transcribe to do a better job with your audio, particularly with proper nouns like “Scholarly Commons.” You can read more about it on the AWS blog.

That said, there is very much an art to transcription, and I’m not sure if computers will ever be able to totally replicate it. When transcribing, I often have to make judgement calls about whether to include aspects of speech like “um”s and “uh”s. People also tend to start a thought and then stop and say something else, so I have to decide whether to include “false starts” like these or not. All of these judgement calls can have a significant impact on how researchers interpret a text, and to me it is crucial that a human sensitive to their implications makes these decisions. This is especially critical when transcribing an oral history that involves a power imbalance between the interviewer and interviewee.

In any case, speech-to-text technology is becoming increasingly powerful, and there may come a day, perhaps very soon, when computers can do just as good a job as humans. In the meantime, though, we will still need to rely on at least some human input to make sure transcripts are accurate.

Automated Live Captions for Virtual and In-Person Meetings

At the University of Illinois at Urbana-Champaign, we are starting to think about what life will look like with a return of in-person services, meetings, and events. Many of us are considering what lessons we want to keep from our time conducting these activities online to make the return to in-person as inclusive as possible.

Main library reading room

“Mainlibraryreadingroom.jpg.” C. E. Crane, licensed under a CC-BY 2.0 Attribution license.

One way to make your meetings and presentations accessible is the use of live, automated captions. Captions benefit those who are hard-of-hearing, those who prefer to read the captions while listening to help focus, people whose first language is not English, and others. Over the course of the last year, several online platforms have introduced or enhanced features that create live captions for both virtual and in-person meetings.

Live Captions for Virtual Meetings and Presentations

Most of the major virtual meeting platforms have implemented automated live captioning services.

Zoom

Zoom gives you the option of using live, automated captions or assigning someone to create manual captions. Zoom’s live transcriptions support only US English and can be affected by background noise, so Zoom recommends using a manual captioner to ensure you are meeting accessibility guidelines. You can also integrate third-party captioning software if you prefer.

Microsoft Teams

MS Teams offers live captions in US English and includes some features that allow captions to be attributed to individual speakers. Their live captioning service automatically filters out profane language and is available on the mobile app.

Google Meet

Unlike Zoom and Teams, Google Meet offers live captions in French, German, Portuguese, and Spanish (both Latin American and European) in addition to English. This feature is also available in the Google Meet app for Android, iPhone, and iPad.

Slack

Slack currently does not offer live automated captions during meetings.

Icon of laptop open with four people in different quadrants representing an online meeting

“Meeting” by Nawicon from the Noun Project.

Live Captions for In-Person Presentations

After our meetings and presentations return to in-person, we can still incorporate live captions whenever possible to make our meetings more accessible. This works best when a single speaker is presenting to a group.

PowerPoint

PowerPoint’s live captioning feature allows your live presentation to be automatically transcribed and displayed on your presentation slides. The captions can be displayed in either the speaker’s native language or translated into other languages. Presenters can also adjust how the captions display on the screen.

Google Slides

The captioning feature in Google Slides is limited to US English and works best with a single speaker. Captions can be turned on during the presentation but do not allow the presenter to customize their appearance.

Icon of four figures around a table in front of a blank presentation screen

“Meeting” by IconforYou from the Noun Project.

As we return to some degree of normalcy, we can push ourselves to imagine creative ways to carry the benefits of online gathering with us into the future. The inclusive practices we have adopted don’t need to disappear, especially as technology and our ways of working continue to adapt.

Introductions: What is GIS, anyways?

This post is part of a series where we introduce you to the various topics that we cover in the Scholarly Commons. Maybe you’re new to the field, or maybe you’re at the point where you’re too afraid to ask… Fear not! We are here to take it back to the basics!

So, what is GIS, anyways?

Geographic Information Systems, or GIS as it is often called, is a way of gathering, maintaining, and analyzing data. GIS uses geography and spatial data to create visualizations in the form of maps, which makes it a very useful way to identify and understand trends, relationships, and patterns in your data across a geographic region. Simply put, GIS is a way of visualizing data geographically, and the key to GIS is spatial data. Alongside spatial data there is attribute data, which is essentially any other data related to the spatial data. For example, if you were looking at the University of Illinois campus, the locations of the buildings would be spatial data, while the type of building (e.g., academic, laboratory, recreation) would be attribute data. Using these two types of data together allows researchers to explore and answer difficult questions.
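
To make the spatial/attribute distinction concrete, here is a small, hedged Python sketch using the third-party geopandas library; the shapefile and its "type" column are hypothetical stand-ins for campus data:

    import geopandas as gpd

    # Building footprints (spatial data) with a "type" column (attribute data)
    buildings = gpd.read_file("campus_buildings.shp")

    # Spatial question: draw the footprints, colored by building type
    buildings.plot(column="type", legend=True)

    # Attribute question: how many laboratory buildings are there?
    print((buildings["type"] == "laboratory").sum())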

While it can get more complex than this, since this is an introductions series, we won’t go into the fine details. If you want to learn more about GIS and the projects you can do with it, you can reach out to the Scholarly Commons’ GIS Specialist, Wenjie Wang.

So, who uses GIS?

Anyone can use GIS! You can use maps to visualize your data to identify problems, monitor change, set priorities, and forecast fluctuations.

There are GIS technologies and applications that assist researchers in performing GIS. The Scholarly Commons has a wide range of GIS resources, including software that you can access from your own computer and a directory of geospatial data available throughout the web and University Library resources. 

If you’re interested in learning more about GIS applications and software and how to apply them to your own projects, you can fill out a consultation request form, attend a Savvy Researcher Workshop, Live Chat with us on Ask a Librarian, or send us an email. We are always happy to help!
