It Matters How We Open Knowledge: Open Access Week 2021

It’s that time of year again! Open Access Week is October 25-31, and the University of Illinois Library is excited to participate. Open Access Week is an international event where the academic and research community comes together to learn about Open Access and to share that knowledge with others. The theme guiding this year’s discussion of open access will be “It Matters How We Open Knowledge: Building Structural Equity.”

These discussions will build on last year’s theme of “Open with Purpose: Taking Action to Build Structural Equity and Inclusion.” While last year’s theme was intended to get people thinking about the ways our current information systems marginalize and exclude, this year’s theme is focused on information equity as it relates to governance.

OA Week digital banner with theme name and date

Specifically, this year’s theme intentionally aligns with the recently released United Nations Educational, Scientific and Cultural Organization (UNESCO) recommendation on Open Science, which encompasses practices such as publishing open research, campaigning for open access, and generally making it easier to publish and communicate scientific knowledge.

Circulated in draft form following discussion by representatives of UNESCO’s 193 member countries, the recommendation powerfully articulates and centers the importance of equity in pursuing a future for scholarship that is open by default. As the first global standard-setting framework on Open Science, the UNESCO Recommendation will provide an important guide for governments around the world as they move from aspiration to the implementation of open research practices.

UNESCO Icon

While the University of Illinois is not hosting any formal events for Open Access Week, the Library encourages students, staff, and faculty to familiarize themselves with existing open access resources, including:

  • IDEALS: The Illinois Digital Environment for Access to Learning and Scholarship collects, disseminates, and provides persistent and reliable access to the research and scholarship of faculty, staff, and students at Illinois. Once an article is deposited in IDEALS, it may be efficiently and effectively accessed by researchers around the world, free of charge.
  • Copyright: Scholarly Communication and Publishing offers workshops and consultation services on issues related to copyright. While the Library cannot offer legal advice, we can help you to identify information and issues you may want to consider in addressing your copyright question.
  • Illinois Open Publishing Network: The Illinois Open Publishing Network (IOPN) is a set of digital publishing initiatives that are hosted and coordinated at the University of Illinois at Urbana-Champaign Library. IOPN offers a suite of publishing services to members of the University of Illinois at Urbana-Champaign community and aims to facilitate the dissemination of high-quality, open access scholarly publications. IOPN services include infrastructure and support for publishing open access journals, monographs, and born-digital projects that integrate multimedia and interactive content.

IOPN logo

For more information on how to support open access at the University of Illinois, please reach out to the Scholarly Commons or the Scholarly Communication and Publishing unit. For more information about International Open Access Week, please visit www.openaccessweek.org. Get the latest updates on Open Access Week events on Twitter using the hashtag #OAWeek.

Making Your Work Accessible Online

A person uses a braille reader

Unsplash @Sigmund

What is Web Accessibility?

Web Accessibility is the ability for individuals with vision, hearing, cognitive, and mobility disabilities to access web content online via their preferred methods.

WCAG defines web content as:

  • Natural information such as text, images, and sounds
  • Code or markup that defines structure, presentation, etc.

The essential components of web accessibility include:

  • Content
  • Web browsers
  • Assistive Technology
  • Users’ Experience
  • Developers
  • Authoring Tools
  • Evaluation Tools

Why It Matters

Individuals with disabilities not only use the web but also contribute to its functions. Website accessibility focuses on the needs of people with disabilities. However, by considering how to make information more available, interactive, and easy to use, we also make content more accessible for everyone.

A website that uses best practices for accessibility provides equitable access and opportunities to all its users, creates a great user experience, increases website interaction (multi-modal interaction), and enhances the overall usability of the site.

Introducing Web Content Accessibility Guidelines (WCAG)

The WCAG developed out of the World Wide Web Consortium’s (W3C) mission of creating international standards for the continued development of the web, and out of the W3C Web Accessibility Initiative’s (WAI) mission to bring together people from different organizations to create guidelines and resources that support people with disabilities.

The WCAG provides “a single shared standard for web content accessibility that meets the needs of individuals, organizations, and governments” worldwide.

The WCAG has four accessibility principles, which form the acronym POUR:

  • Principle 1: Perceivable
    • The information and methods of interacting with hardware and software must be presented in ways that users can perceive. Examples include having text alternatives and using captioning in videos.
  • Principle 2: Operable
    • The hardware and software elements and navigation must be practical for users. Examples include ensuring keyboard accessibility and allowing users enough time to read and understand content.
  • Principle 3: Understandable
    • The information and the operation of hardware and software must be readable and understandable for users. Examples include ensuring that the text is easy to read and retaining the same style of program selections on different pages.
  • Principle 4: Robust
    • The content must have high compatibility so it can be interpreted by a variety of software used to access the web, including assistive technologies. Examples include proper parsing support, that is, ensuring that HTML elements have complete start and end tags so that software such as screen readers can interpret the content reliably.

Tip: Validate the accessibility of your website using these tools: Web Accessibility Evaluation Tools List
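
A common check these evaluation tools automate is flagging images that lack a text alternative (WCAG's "Perceivable" principle). As a rough, hypothetical sketch of that idea, Python's built-in `html.parser` is enough; the sample markup below is invented for illustration, and real tools perform many more checks:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags whose alt attribute is missing or empty."""

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            # Flags alt that is absent or empty; note an intentionally
            # empty alt="" is valid for purely decorative images.
            if not attr_map.get("alt"):
                self.missing_alt.append(attr_map.get("src", "<no src>"))

checker = AltTextChecker()
checker.feed('<img src="banner.png"><img src="logo.png" alt="OA Week logo">')
print(checker.missing_alt)  # ['banner.png']
```

A full evaluation tool would also check captions, contrast, keyboard operability, and more, which is why the curated list above is the better starting point.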

What has the University of Illinois Done to Meet these Standards?

University of Illinois web developers adhere to these web accessibility standards:

  • The Illinois Information Technology Accessibility Act (IITAA)
  • Section 508 of the Rehabilitation Act of 1973, as amended in 1998
  • The Web Content Accessibility Guidelines (WCAG)

The Main Library provides technological assistance via:

  • Hardware
    • Large Screen Monitors and Adjustable Tables
    • Clearview+ Magnification System
    • Braille Display
    • Tranquility Kits
  • Software
    • JAWS (Job Access With Speech)
    • Kurzweil 3000
    • ZoomText Magnifier/Reader
    • OpenBook
    • Dolphin EasyReader
    • OpenDyslexic

Please see Accessibility and Assistive Technology LibGuide for more information.

If you are interested in learning more about web accessibility and the WCAG, visit the WCAG website: https://www.w3.org/WAI/standards-guidelines/wcag/

What Are the Digital Humanities?

Introduction

As new technology has revolutionized the ways all fields gather information, scholars have integrated digital software into traditional models of research. While digital software may seem relevant only to scientific research, digital projects play a crucial role in disciplines not traditionally associated with computer science. Some of the biggest digital initiatives actually take place in fields such as English, History, and Philosophy, in what is known as the digital humanities. The digital humanities are an innovative way to incorporate digital data and computer science within humanities-based research. Although some aspects of the digital humanities are exclusive to specific fields, most digital humanities projects are interdisciplinary in nature. Below are three general ways that digital humanities projects have enhanced approaches to humanities research for scholars in these fields.

Digital Access to Resources

Digital access is a way of taking items necessary for humanities research and creating a system where users can easily access these resources. This work involves digitizing physical items and formatting them for storage in a database that permits access to its contents. Since some of these databases may hold thousands or even millions of items, digital humanists also work to ensure that users can locate specific items quickly and easily. Thus, digital access requires both the digitization and database storage of physical items and the creation of a path for scholars to find them for research purposes.

Providing Tools to Enhance Interpretation of Data and Sources

The digital humanities can also change how we interpret sources and other items used in humanities research. Data visualization software, for example, helps simplify large, complex datasets and presents the data in more visually appealing ways. Likewise, text mining software uncovers trends in text automatically, potentially saving digital humanists hours or even days compared with analyzing the text through analog methods. Finally, Geographic Information Systems (GIS) software allows users working on humanities projects to create special types of maps that assist in both visualizing and analyzing data. These software programs and more have dramatically transformed the ways digital humanists interpret and visualize their research.
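
As a toy illustration of the kind of counting that text mining tools automate at a much larger scale, Python's standard library alone can surface word-frequency trends. The sample text here is invented, and real projects would use dedicated tools with proper tokenization and stopword lists:

```python
from collections import Counter
import re

text = """The digital humanities combine computing with humanities research.
Digital projects let humanities scholars analyze text at scale."""

# Tokenize into lowercase words, then count occurrences, skipping
# very short tokens for a slightly cleaner view of the trends.
words = re.findall(r"[a-z]+", text.lower())
counts = Counter(w for w in words if len(w) > 3)
print(counts.most_common(3))
```

Scaled up to thousands of documents, this same basic idea underpins trend analyses that would take days by hand.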

Digital Publishing

The digital humanities have opened new opportunities for scholars to publish their work. In some cases, digital publishing simply means digitizing an article or item in print to expand the reach of a given publication to readers who may not have direct access to the physical version. In other cases, digital publishing initiatives publish research that is accessible only in a digital format. One benefit of digital publishing is that it gives scholars more opportunities to publish their research and reaches a wider audience than print publishing alone. As a result, the digital humanities provide scholars more venues for their research while also expanding the reach of their publications.

How Can I Learn More About the Digital Humanities?

There are many ways to get involved, both at the University of Illinois and around the globe. Here are a few examples that can help you get started on your own digital humanities project:

  • HathiTrust is a partnership through the Big Ten Academic Alliance that holds over 17 million items in its collection.
  • Internet Archive is a public, multimedia database that allows for open access to a wide range of materials.
  • The Scholarly Commons page on the digital humanities offers many of the tools used for data visualization, text mining, GIS software, and other resources that enhance analysis within a humanities project. There are also a couple of upcoming Savvy Researcher workshops that will go over how to use software for the digital humanities.
  • SourceLab is an initiative through the History Department that works to publish and preserve digital history projects. Many other humanities fields have equivalents to SourceLab that serve the specific needs of a given discipline.

Big Ten Academic Alliance Open Access Developments

Last month, the Big Ten Academic Alliance (BTAA) made a series of announcements regarding its support of Open Access (OA) initiatives across its member libraries. Open Access is the free, immediate, online availability of research articles coupled with the rights to use those articles fully in the digital environment. Put plainly, Open Access ensures that anyone, anywhere, can access and use information. By supporting these developments in OA, the BTAA aims to make information more accessible to the university community, to benefit scholars by eliminating paywalls to research, and to help researchers to publish their own work.

Big ten academic alliance logo

On July 19, the BTAA announced the finalization of a three-year collective agreement with the Open Library of Humanities (OLH), a charitable organization dedicated to publishing open access scholarship with no author-facing article processing charges. OLH publishes academic journals from across the humanities disciplines and hosts its own multidisciplinary journal. This move was made possible thanks to the OLH Open Consortial Offer, an initiative that offers consortia, societies, networks, and scholarly projects the opportunity to join the Open Library of Humanities Library Partnership Subsidy system as a bloc, enabling each institution to benefit from a discount. Through this agreement, the BTAA hopes to expand scholarly publishing opportunities available to its member libraries, including the University of Illinois.

Following the finalization of the OLH agreement, the BTAA announced on July 21 the finalization of a three-year collective action agreement with MIT Press that provides Direct to Open (D2O) access for all fifteen BTAA member libraries. Developed over two years with the support of the Arcadia Fund, D2O gives institutions the opportunity to harness collective action to support access to knowledge. As participating libraries, the Big Ten members will help open access to all new MIT Press scholarly monographs and edited collections from 2022. As a BTAA member, the University of Illinois will support the shifting publication of new MIT Press titles to open access. The agreement also gives the University of Illinois community access to MIT Press eBook backfiles that were not previously published open access.

By entering into these agreements, the BTAA aims to promote open access publishing across its member libraries. On how these initiatives will impact the University of Illinois scholarly community, Dan Tracy, Head of Scholarly Communication and Publishing, said:

“The Library’s support of OLH and MIT Press is a crucial investment in open access publishing infrastructure. The expansion of open access publishing is a great opportunity to increase the reach and impact of faculty research, but common models of funding open access through article processing charges makes it challenging for authors in the humanities and social sciences particularly to publish open access. The work of OLH to publish open access journals, and MIT Press to publish open access books, without any author fees while also providing high quality, peer reviewed scholarly publishing opportunities provides greater equity across disciplines.”

Since these announcements, the BTAA has continued to support open access initiatives among its member libraries. Most recently, the BTAA and the University of Michigan Press signed a three-year agreement on August 5 that provides multi-year support for the University of Michigan Press’ new open access model Fund to Mission. Based on principles of equity, justice, inclusion, and accessibility, Fund to Mission aims to transition upwards of 75% of the press’ monograph publications into open access resources by the end of 2023. This initiative demonstrates a move toward a more open, sustainable infrastructure for the humanities and social sciences, and is one of several programs that university presses are developing to expand the reach of their specialist publications. As part of this agreement, select BTAA members, University of Illinois included, will have greater access to significant portions of the University of Michigan’s backlist content.

The full release and more information about recent BTAA announcements can be found on the BTAA website. To learn more about Open Access efforts at the University of Illinois, visit our OA Guide.

Comparison: Human vs. Computer Transcription of an “It Takes a Campus” Episode

Providing transcripts of audio or video content is critical for making these experiences accessible to a wide variety of audiences, especially those who are deaf or hard of hearing. Even those with perfect hearing might sometimes prefer to skim a transcript rather than listen to audio. However, the slowest part of the audio and video publishing process is often the transcription portion of the workflow. This was certainly true with the recent interview I did with Ted Underwood, which I conducted on March 2 but did not release until March 31. The majority of that time was spent transcribing the interview; editing and quality control were significantly less time consuming.

Theoretically, one way we could speed up this process is to have computers do it for us. Over the years I’ve had many people ask me whether automatic speech-to-text transcription is a viable alternative to human transcription in dealing with oral history or podcast transcription. The short answer to that question is: “sort of, but not really.”

Speech-to-text, or speech recognition, technology has come a long way, particularly in recent years. Its performance has improved to the point where human users can give auditory commands to a virtual assistant such as Alexa, Siri, or Google Home, and the device usually gives an appropriate response to the person’s request. However, recognizing a simple command like “Remind me at 5 pm to transcribe the podcast” is not quite the same as correctly recognizing and transcribing a 30-minute interview, which requires handling differences between two speakers and lengthy blocks of text.

To see how well the best speech recognition tools perform today, I decided to have one of these tools attempt to transcribe the Ted Underwood podcast interview and compare the result to the transcript I produced by hand. The specific tool I selected was Amazon Transcribe, which is part of the Amazon Web Services (AWS) suite of tools. This service is considered one of the best options available and uses cloud computing to convert audio data to textual data, presumably much like how Amazon’s Alexa works.

It’s important to note that Amazon Transcribe is not free; however, it costs only $0.0004 per second of audio, so Ted Underwood’s interview cost me only 85 cents to transcribe. For more on Amazon Transcribe’s costs, see this page.
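
At the quoted per-second rate, the cost arithmetic is simple enough to sketch (under the assumption that the rate above is the only charge; at that rate, the 85-cent figure corresponds to roughly 35 minutes of audio):

```python
# Estimate Amazon Transcribe cost at the quoted rate of $0.0004
# per second of audio, rounded to the nearest cent.
RATE_PER_SECOND = 0.0004

def transcribe_cost(minutes: float) -> float:
    """Return the estimated cost in dollars for a recording of the given length."""
    return round(minutes * 60 * RATE_PER_SECOND, 2)

print(transcribe_cost(30))    # 0.72
print(transcribe_cost(35.4))  # 0.85
```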

In any case, here is a comparison between my manual transcript vs. Amazon Transcribe. To begin, here is the intro to the podcast as spoken and later transcribed by me:

Ben Ostermeier: Hello and welcome back to another episode of “It Takes
a Campus.” My name is Ben, and I am currently a graduate assistant at
the Scholarly Commons, and today I am joined with Dr. Ted Underwood,
who is a professor at the iSchool here at the University of Illinois.
Dr. Underwood, welcome to the podcast and thank you for taking time
to talk to me today.

And here is Amazon Transcribe’s interpretation of that same section of audio, with changes highlighted:

Hello and welcome back to another episode of it takes a campus. My
name is Ben, and I am currently a graduate assistant at Scali Commons.
And today I'm joined with Dr Ted Underwood, who is a professor 
at the high school here at the University of Illinois. 
Dr. Underwood, welcome to the podcast. Thank you for taking 
time to talk to me today.

As you can see, Amazon Transcribe did a pretty good job, but there are some mistakes and differences from the transcript I wrote by hand. It particularly had trouble with proper nouns like “Scholarly Commons” and “iSchool,” along with some minor issues like omitting the period after “Dr” and dropping the “and” conjunction in the last sentence.

Screenshot of text comparison between Amazon-generated and human-generated transcripts.

Screenshot of text comparison between Amazon-generated (left) and human-generated (right) transcripts of the podcast episode.

You can see the complete changes between the two transcripts at this link.

Please note that the raw text I received from Amazon Transcribe was not separated into paragraphs initially. I had to do that myself in order to make the comparison easier to see.

In general, Amazon Transcribe does a pretty good job of recognizing speech but makes a fair number of mistakes that require cleaning up afterwards. Personally, I find it faster and less frustrating to transcribe by hand than to correct a ‘dirty’ transcript, but others may prefer the alternative. Additionally, an institution may have a very large number of untranscribed oral histories, for example, and if the choice is between a dirty transcript and no transcript at all, a dirty transcript is naturally preferable.
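
One rough way to quantify the gap between a hand-made and a machine-made transcript is a word-level comparison, which Python's standard difflib module can do. The snippets below are short illustrative excerpts based on the passages above, not the full transcripts:

```python
import difflib

human = "a graduate assistant at the Scholarly Commons"
machine = "a graduate assistant at Scali Commons"

# Compare the two versions word by word rather than character by character.
h_words, m_words = human.split(), machine.split()
matcher = difflib.SequenceMatcher(None, h_words, m_words)
print(f"word-level similarity: {matcher.ratio():.2f}")

# List the word-level edits a human would have to clean up.
for op, h1, h2, m1, m2 in matcher.get_opcodes():
    if op != "equal":
        print(op, h_words[h1:h2], "->", m_words[m1:m2])
```

Run over whole transcripts, a score like this gives a quick sense of how "dirty" an automated transcript is before deciding whether correcting it beats transcribing from scratch.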

Also, while I did not have time to do this, there are ways to train Amazon Transcribe to do a better job with your audio, particularly with proper nouns like “Scholarly Commons.” You can read more about it on the AWS blog.

That said, there is very much an art to transcription, and I’m not sure if computers will ever be able to totally replicate it. When transcribing, I often have to make judgment calls about whether to include aspects of speech like “um”s and “uh”s. People also tend to start a thought and then stop and say something else, so I have to decide whether or not to include “false starts” like these. All of these judgment calls can have a significant impact on how researchers interpret a text, and to me it is crucial that a human sensitive to their implications makes these decisions. This is especially critical when transcribing an oral history that involves a power imbalance between the interviewer and interviewee.

In any case, speech-to-text technology is becoming increasingly powerful, and there may come a day, perhaps very soon, when computers can do just as good a job as humans. In the meantime, though, we will still need to rely on at least some human input to make sure transcripts are accurate.

Automated Live Captions for Virtual and In-Person Meetings

At the University of Illinois at Urbana-Champaign, we are starting to think about what life will look like with a return of in-person services, meetings, and events. Many of us are considering what lessons we want to keep from our time conducting these activities online to make the return to in-person as inclusive as possible.

Main library reading room

“Mainlibraryreadingroom.jpg.” C. E. Crane, licensed under a CC-BY 2.0 Attribution license.

One way to make your meetings and presentations accessible is the use of live, automated captions. Captions benefit those who are hard-of-hearing, those who prefer to read the captions while listening to help focus, people whose first language is not English, and others. Over the course of the last year, several online platforms have introduced or enhanced features that create live captions for both virtual and in-person meetings.

Live Captions for Virtual Meetings and Presentations

Most of the major virtual meeting platforms have implemented automated live captioning services.

Zoom

Zoom gives you the option of using either live, automated captions or assigning someone to create manual captions. Zoom’s live transcriptions only support US English and can be affected by background noise, so Zoom recommends using a manual captioner to ensure you are meeting accessibility guidelines. You can also integrate third-party captioning software if you prefer.

Microsoft Teams

MS Teams offers live captions in US English and includes some features that allow captions to be attributed to individual speakers. Their live captioning service automatically filters out profane language and is available on the mobile app.

Google Meet

Unlike Zoom and Teams, Google Meet offers live captions in French, German, Portuguese, and Spanish (both Latin America and Spain) in addition to English. This feature is also available on the Google Meet app for Android, iPhone, and iPad.

Slack

Slack currently does not offer live automated captions during meetings.

Icon of laptop open with four people in different quadrants representing an online meeting

“Meeting” by Nawicon from the Noun Project.

Live Captions for In-Person Presentations

After our meetings and presentations return to in-person, we can still incorporate live captions whenever possible to make our meetings more accessible. This works best when a single speaker is presenting to a group.

PowerPoint

PowerPoint’s live captioning feature allows your live presentation to be automatically transcribed and displayed on your presentation slides. The captions can be displayed in either the speaker’s native language or translated into other languages. Presenters can also adjust how the captions display on the screen.

Google Slides

The captioning feature in Google Slides is limited to US English and works best with a single speaker. Captions can be turned on during the presentation but do not allow the presenter to customize their appearance.

Icon of four figures around a table in front of a blank presentation screen

“Meeting” by IconforYou from the Noun Project.

As we return to some degree of normalcy, we can push ourselves to imagine creative ways to take the benefits of online gathering with us into the future. The inclusive practices we have adopted don’t need to just disappear, especially as technology and our ways of working continue to adapt.

Introductions: What is GIS, anyways?

This post is part of a series where we introduce you to the various topics that we cover in the Scholarly Commons. Maybe you’re new to the field, or maybe you’re at the point where you’re too afraid to ask… Fear not! We are here to take it back to the basics!

So, what is GIS, anyways?

Geographic Information Systems, or GIS as it is often called, is a way of gathering, maintaining, and analyzing data. GIS uses geography and spatial data to create visualizations in the form of maps. This is a very useful way to analyze your data to identify and understand trends, relationships, and patterns across a geographic region. Simply put, GIS is a way of visualizing data geographically, and the key to GIS is spatial data. In addition to spatial data, there is attribute data, which is essentially any other data that relates to the spatial data. For example, if you were looking at the University of Illinois campus, the actual locations of the buildings would be spatial data, while the type of building (e.g., academic, laboratory, or recreation) would be attribute data. Using these two types of data together allows researchers to explore and answer difficult questions.
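
To make the spatial-versus-attribute distinction concrete, here is a minimal Python sketch. The building names are real campus buildings, but the coordinates and type labels are approximate and purely illustrative, and real GIS work would use dedicated software rather than plain dictionaries:

```python
# Toy campus dataset: spatial data (lon/lat) paired with attribute data (type).
buildings = [
    {"name": "Main Library", "lon": -88.2290, "lat": 40.1047, "type": "academic"},
    {"name": "ARC",          "lon": -88.2360, "lat": 40.1010, "type": "recreation"},
    {"name": "Loomis Lab",   "lon": -88.2248, "lat": 40.1097, "type": "laboratory"},
]

# Attribute query: filter on a non-spatial property.
academic = [b["name"] for b in buildings if b["type"] == "academic"]

# Spatial query: everything north of a given latitude.
north_of = [b["name"] for b in buildings if b["lat"] > 40.104]

print(academic, north_of)
```

Combining the two kinds of query ("which laboratory buildings lie within this region?") is exactly where GIS starts answering research questions.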

While it can get more complex than that, since this is an introductions series, we won’t go into the finer details. If you want to learn more about GIS and the projects you can do with it, you can reach out to the Scholarly Commons’ GIS Specialist, Wenjie Wang.

So, who uses GIS?

Anyone can use GIS! You can use maps to visualize your data to identify problems, monitor change, set priorities, and forecast fluctuations.

There are GIS technologies and applications that assist researchers in performing GIS. The Scholarly Commons has a wide range of GIS resources, including software that you can access from your own computer and a directory of geospatial data available throughout the web and University Library resources. 

If you’re interested in learning more about GIS applications and software and how to apply them to your own projects, you can fill out a consultation request form, attend a Savvy Researcher Workshop, Live Chat with us on Ask a Librarian, or send us an email. We are always happy to help!



Introductions: What is Digital Scholarship, anyways?

This is the beginning of a new series where we introduce you to the various topics that we cover in the Scholarly Commons. Maybe you’re new to the field, or maybe you’re at the point where you’re too afraid to ask… Fear not! We are here to take it back to the basics!

What is digital scholarship, anyways?

Digital scholarship is an all-encompassing term that can be used very broadly. It refers to the use of digital tools, methods, evidence, or any other digital materials to complete a scholarly project. So, if you are using digital means to construct, analyze, or present your research, you’re doing digital scholarship!

It seems really basic to say that digital scholarship is any project that uses digital means because nowadays, isn’t that every project? Yes and no. We use the term “digital” quite liberally. If you used Microsoft Word just to write an essay about a lab you did in class, that is not digital scholarship. However, if you used specialized software to analyze the results of a survey you used to gather data, and then wrote about it in an essay that you typed in Microsoft Word, that is digital scholarship! If you then got this essay published and hosted in an online repository so that other researchers could find it, that would be digital scholarship too!

Many higher education institutions have digital scholarship centers at their campus that focus on providing specialized support for these types of projects. The Scholarly Commons is a digital scholarship space in the University Main Library! Digital scholarship centers are often pushing for new and innovative means of discovery. They have access to specialized software and hardware and provide a space for collaboration and consultations with subject experts that can help you achieve your project goals.

At the Scholarly Commons, we support a wide array of topics that support digital and data-driven scholarship that this series will cover in the future. We have established partners throughout the library and across the wider University campus to support students, staff, and faculty in their digital scholarship endeavors.

Here is a list of the digital scholarship service points we support:

You can find a list of all the software the Scholarly Commons has to support digital scholarship here, and a list of the Scholarly Commons hardware here. If you’re interested in learning more about the foundations of digital scholarship, follow along with our Introductions series as we go back to the basics.

As always, if you’re interested in learning more about digital scholarship and how to support your own projects, you can fill out a consultation request form, attend a Savvy Researcher Workshop, Live Chat with us on Ask a Librarian, or send us an email. We are always happy to help!

Simple NetInt: A New Data Visualization Tool from Illinois Assistant Professor, Juan Salamanca

Juan Salamanca, Ph.D., Assistant Professor in the School of Art and Design at the University of Illinois Urbana-Champaign, recently created a new data visualization tool called Simple NetInt. Though developed from a tool he created a few years ago, this tool brings entirely new opportunities to digital scholarship! This week we had the chance to talk to Juan about this new tool in data visualization. Here’s what he said…

Simple NetInt is a JavaScript version of NetInt, a Java-based node-link visualization prototype designed to support the visual discovery of patterns across large datasets by displaying disjoint clusters of vertices that can be filtered, zoomed in on, or drilled down into interactively. The visualization strategy used in Simple NetInt is to place clustered nodes in independent 3D spaces and draw links between nodes across multiple spaces. The result is a simple graphical user interface that enables visual depth as an intuitive dimension for data exploration.
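
The data model described here (nodes that each live in their own cluster's coordinate system, with edges that can jump between clusters) can be sketched in a few lines. This is a hypothetical illustration in Python, not Simple NetInt's actual code, which is JavaScript; the node names and subspaces are invented:

```python
from dataclasses import dataclass

@dataclass
class Node:
    id: str
    subspace: str  # which cluster / coordinate system the node belongs to
    x: float       # position within that subspace (e.g., a timeline offset)

@dataclass
class Edge:
    source: str
    target: str

# Three disjoint clusters, each with its own coordinate system.
nodes = {
    "paper-1": Node("paper-1", "publications", x=2015.0),
    "ep-3":    Node("ep-3", "episodes", x=3.0),
    "mal":     Node("mal", "characters", x=0.0),
}
edges = [Edge("paper-1", "ep-3"), Edge("ep-3", "mal")]

# Edges that "jump" between coordinate systems, as in the description above.
cross_space = [e for e in edges
               if nodes[e.source].subspace != nodes[e.target].subspace]
print(len(cross_space))
```

The interesting rendering problem, as the interview below discusses, is drawing those cross-space links when each endpoint is positioned by a different coordinate system.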

Simple NetInt Interface

Check out the Simple NetInt tool here!

In collaboration with Professor Eric Benson, Salamanca tested a prototype of Simple NetInt with a dataset about academic publications, episodes, and story locations of the Sci-Fi TV series Firefly. The tool shows a network of research relationships between these three sets of entities, similar to a citation map but on a timeline following the episodes’ chronology.
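The core idea described above (disjoint clusters of nodes living in independent 3D spaces, with links drawn between nodes across those spaces) can be sketched in plain JavaScript. This is a minimal illustration with hypothetical names, not the actual Simple NetInt API:

```javascript
// Each cluster of nodes lives in its own subspace, separated from the
// others by a depth offset along the z-axis. Nodes are positioned in
// the subspace's local x/y coordinates.
function makeSubspace(id, depth) {
  return { id, depth, nodes: new Map() };
}

function addNode(subspace, nodeId, x, y) {
  subspace.nodes.set(nodeId, { x, y, z: subspace.depth });
}

// An edge may connect nodes that belong to different subspaces,
// so it is drawn "across" the visual depth between them.
function makeEdge(fromSpace, fromId, toSpace, toId) {
  return { from: fromSpace.nodes.get(fromId), to: toSpace.nodes.get(toId) };
}

// Two of the three entity sets from the Firefly example:
const papers = makeSubspace('papers', 0);
const episodes = makeSubspace('episodes', 100);
addNode(papers, 'p1', 10, 20);
addNode(episodes, 'e1', 30, 40);

// A cross-space link: its endpoints sit at different depths (z = 0 and
// z = 100), which is what creates the sense of visual depth.
const link = makeEdge(papers, 'p1', episodes, 'e1');
```

A renderer would then project each endpoint's (x, y, z) position and draw the connecting line, letting depth act as the extra dimension for exploration.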

What inspired you to create this new tool?

This tool is an extension of a prototype I built five years ago for the visualization of financial transactions between bank clients. It is software that visualizes networks by representing entities and their relationships as nodes and edges. This new version is used for the visualization of a totally different dataset: scholarly work published in papers, episodes of a TV series, and the narrative of the series itself. So, the network representation portrays relationships between journal articles, episode scripts, and fictional characters. I am also using it to design a large mural for the Siebel Center for Design.

What are your hopes for the future use of this project?

The final goal of this project is to develop an augmented reality visualization of networks to be used in the field of digital humanities. This proof of concept shows that scholars in the humanities come across datasets whose dimensional systems might not be compatible with one another. For instance, a timeline of scholarly publications may encompass 10 or 15 years, but the content of what is being discussed in that body of work may encompass centuries of history. Therefore, these two different temporal dimensions need to be represented in a way that helps scholars in their interpretations. I believe that an immersive visualization may drive new questions for researchers or convey new findings to the public.

What were the major challenges that came with creating this tool?

The major challenge was to find a way to represent three different systems of coordinates in the same space. The tool has a universal space that contains relative subspaces for each dataset loaded. So, the nodes instantiated from each dataset are positioned in their own coordinate system, which could be a timeline, a position relative to a map, or just clusters by proximity. But the edges that connect nodes jump from one coordinate system to the other. This creates the idea of a system of nested spaces that works well with a few subspaces, but I am still figuring out the most intuitive way to navigate larger multidimensional spaces.
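The nested-spaces idea described here amounts to a coordinate transform: each subspace maps its local coordinates (a timeline position, a map offset, a cluster layout) into the shared universal space, and edges are drawn between the resulting universal positions. A hedged sketch, with invented names and a simple origin-plus-scale transform standing in for whatever Simple NetInt actually does:

```javascript
// Map a node's local coordinates into the universal space using the
// subspace's own origin and scale. Each subspace also sits at its own
// depth (z), which keeps the datasets visually separated.
function toUniversal(subspace, local) {
  return {
    x: subspace.origin.x + local.x * subspace.scale,
    y: subspace.origin.y + local.y * subspace.scale,
    z: subspace.origin.z,
  };
}

// Two subspaces with incompatible units: a timeline measured in years
// and a cluster laid out directly in pixels.
const timeline = { origin: { x: 0, y: 0, z: 0 },   scale: 50 }; // 50 px per year
const cluster  = { origin: { x: 0, y: 0, z: 300 }, scale: 1 };  // raw pixels

// An edge between nodes in different subspaces is drawn between their
// universal-space positions, "jumping" between coordinate systems.
const a = toUniversal(timeline, { x: 10, y: 2 });   // year 10 on the timeline
const b = toUniversal(cluster,  { x: 120, y: 80 }); // a clustered node
```

Because every subspace resolves to the same universal space, the renderer never needs to know which units each dataset uses; only the per-subspace transform does.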

What are your own research interests and how does this project support those?

My research focuses on understanding how designed artifacts affect the viscosity of social action. What I do is investigate how the design of artifacts facilitates or hinders cooperation and collaboration between people. I use visual analytics methods to conduct my research, so the analysis of networks is an essential tool. I have built several custom-made tools for observing the interaction between people and things, and this is one of them.

If you would like to learn more about Simple NetInt, you can find contact information for Professor Juan Salamanca and more information on his research here!

If you’re interested in learning more about data visualizations for your own projects, check out our guide on visualizing your data, attend a Savvy Researcher Workshop, Live Chat with us on Ask a Librarian, or send us an email. We are always happy to help!

Happy Open Education Week 2021!

Every March, librarians around the world celebrate Open Education Week, a time to raise awareness of the need for and use of Open Educational Resources on our campuses. Many libraries are engaged in promoting these resources to faculty and administrators in order to help reduce the cost of course materials for students.

OEWeek 2021 Logo

“Open Education Week Logo.” OEWeek. https://www.openeducationweek.org/page/materials. Licensed under a CC-BY 4.0 license.

Open Educational Resources are learning materials that are published without copyright restrictions, meaning they can be freely distributed, reused, and modified. Faculty who assign Open Educational Resources in their classes help eliminate the barriers to academic success students can face when they cannot afford their course materials. The Florida Virtual Campus survey has demonstrated over several iterations how these costs negatively impact students – whether it’s dropping or failing a course, changing majors, or struggling academically.

OpenStax is one of the most well-known publishers of OER and is often cited by librarians as an example of high-quality, low-cost textbooks. While librarians often work as OER advocates on their campuses, we are not always the ones publishing our own, original OER. This makes the July 2020 publication of Instruction in Libraries and Information Centers: An Introduction a unique and exciting accomplishment that will benefit Library and Information Science students for years to come.

Front cover of Instruction in Libraries by Saunders and Wong

This textbook, authored by Laura Saunders, Associate Professor of Library and Information Science at Simmons College, and Melissa Wong, Adjunct Lecturer of Library and Information Science at UIUC, is freely available for students to read online, download, and print. The book is the first open access textbook to be published by Windsor & Downs Press through IOPN, the University Library’s publishing unit. Other open access books available through the press include Sara Benson’s The Sweet Public Domain: Celebrating Copyright Expiration with the Honey Bunch Series.

Interested in the ways libraries are celebrating these accomplishments and bringing attention to the need to continue our advocacy? Check out the Twitter hashtag #OEWeek to join the conversation.