Public Domain and Creativity

This post was guest authored by Scholarly Communication & Publishing Graduate Assistant Nicole Moriah Rhodes.


The first American copyright law protected works for fourteen years after they were published and gave the copyright owner the opportunity to renew the copyright for another fourteen years. Few did, and works passed quickly into the public domain.

The copyright term is much longer now: it varies, but if you are a human author, your copyrights will likely last until 70 years after you die. Some people argue that a long copyright term increases the incentive to make creative work.

However, despite the longer term, statistical analysis of copyright registrations across changes in population, the economy, US law, and available technology does not find that increasing copyright protection increases the number of copyrighted works. Raymond Shih Ray Ku, Jiayang Sun, & Yiying Fan (2009) conclude that whatever advocates of broader copyright laws are pursuing, it probably isn’t an increase in the amount of creative work: among the variables in their study, the best predictor of the number of new creative works is population. Their data suggest that “Laws that reduce or otherwise limit copyright protection are actually more likely to increase the number of new works” (1673) than laws granting more protection.

Such a long period of copyright protection leaves a lot of content unusable to other creators. This comic about documentary filmmakers demonstrates how stringent copyright protections can prevent creative remixing and impede the accurate representation of the world. Work in the public domain can be shared freely, but our real lives are full of content protected by copyright, and people trying to make documentaries can be inhibited by copyright even on incidental work. When they want to use copyrighted material under the fair use doctrine, the threat of lawsuits can have a chilling effect.

Lawrence Lessig (2004) uses the phrase “Walt Disney creativity” to describe “a form of expression and genius that builds upon the culture around us and makes it something different” (24). Disney’s Cinderella, Disney’s live-action Cinderella, fanfiction, and The Lizzie Bennet Diaries could all be considered examples of Walt Disney creativity. But Disney had access to fairly recent work in his time. As Lessig writes:

“Thus, most of the content from the nineteenth century was free for Disney to use and build upon in 1928. It was free for anyone— whether connected or not, whether rich or not, whether approved or not—to use and build upon.

“From 1790 until 1978, the average copyright term was never more than thirty-two years, meaning that most culture just a generation and a half old was free for anyone to build upon without the permission of anyone else. Today’s equivalent would be for creative work from the 1960s and 1970s to now be free for the next Walt Disney to build upon without permission. Yet today, the public domain is presumptive only for content from before the Great Depression.” (24-25)

Michael Hart, the creator of Project Gutenberg and a longtime Urbana resident, viewed copyright law as impeding the abundance that technology could create, beginning with the very first copyright laws after the invention of the Gutenberg Press. While Ku, Sun, & Fan (2009) do find that copyright law helps create and protect both wealth and jobs and allows creators to be rewarded for their work rather than requiring sponsorship, they advocate for reducing copyright protection where it impedes distribution or creativity.

“Because copyright law works in the negative—effectively saying ‘do not use this work, do not copy this work, do not imitate this work’—we are not sending a message that society values the creation of new works. We are only sending the message that we should stay away from those works already created” (1722).

Creative Commons is one venture designed to allow creators to share their work for other creators’ use while preserving the level of protection they choose. However, the default is still a system that restricts access to cultural works past the time when the creator might care, and can even keep works from being preserved so they will be usable when they enter the public domain. Creators should be able to benefit from the work they create, but increasing protections does not necessarily increase those benefits. Excessive copyright terms keep us from being able to discuss and rethink our common culture.

Copyright as a Tool for Censorship

This post was guest authored by Scholarly Communication & Publishing Graduate Assistant Nicole Moriah Rhodes.


Copyright should be used to encourage speech and not to silence it. The stories below demonstrate that copyright can be used to limit the rights of technology users and censor criticism.

“In practical terms, the DMCA legalized technical controls on access to electronic works; it renders obsolete traditional rules for reading and sharing print materials and, simultaneously, enables content owners to implement a pay-per-use system that controls who has access, when, how much and from where. So, for instance, you can lend a paperback to friends, but you aren’t allowed to do the same thing with an electronic book.”

“The database shows that Ares Rights has filed at least 186 complaints since 2011, with 87 made on behalf of politicians, political parties, state media, and state agencies in the Americas.” (CPJ)

“They were received by political commentators who used images of Correa, transmitted on Ecuadoran public television, in videos uploaded to YouTube, in order to make visible the resistance of local communities to the onslaught of mining companies in the country’s inland provinces. The same thing happened with videos that used stock footage to illustrate the inconsistencies of the President’s statements together with videos of protests against the exploitation of Yasuní national park, and images of repression against students.” (Derechos Digitales)

  • Electronic Frontier Foundation: To be eligible under the DMCA’s safe harbor provisions, companies must comply with legitimate takedown notices. But many hosts end up taking down content that can be legally shared. Copyright takedown notices can be used to hassle critics. Punishing bogus claims is difficult, and the damages for failing to comply can be severe.

“According to the latest numbers, Twitter does not comply with nearly 1 in 4 takedown notices it receives; Wikimedia complies with less than half; and WordPress complies with less than two-thirds. Each organization explains in its report that the notices with which they don’t comply are either incomplete or abusive.”

Closed Doors or Open Access?: Envisioning the Future of the United States Copyright Office

Copyright Librarian Sara Benson

It’s Copyright Week! For today’s theme of “transparency,” Copyright Librarian Sara Benson discusses her thoughts on the Copyright Office’s activities to review Section 108.


In 2005, the Copyright Office, under the guidance of the Register of Copyrights at the time, Mary Beth Peters, called for a Study Group to convene and review possible amendments to Section 108. A follow-up meeting was held in 2012. These meetings were not unusual, but what followed them was both strange and unsettling.

The proceedings after the Study Group, which took place in the summer of 2016 under the guidance of Maria Pallante, were unusual in that they were conducted as face-to-face meetings between concerned citizens and members of the Copyright Office rather than through a call for online communications between citizens and the Office. On the one hand, this gave the members of the Office a chance to engage in a dialogue with the concerned citizens. On the other, it meant that generally only those with the resources to travel to Washington, D.C. were privileged with the ability to engage with the members of the Office. However, the Office did note that it would engage in telephone conversations, if necessary. In any event, none of these conversations were ever made public.

At that time, it seemed that the Copyright Office was making an intentional move away from a public debate about copyright to a cloistered room with a privileged few. In my view, that move was undemocratic and should be discouraged in the future. Indeed, although the Copyright Office did publish a list of individuals and organizations it met with to discuss Section 108, the actual subject and content of those discussions remains a mystery.

Notably, shortly after taking office as the new Librarian of Congress, Dr. Carla Hayden removed Maria Pallante from her position as Register of Copyrights. Does this signal a move away from the process that was undertaken to review Section 108? Likely it does, as Librarian of Congress Dr. Hayden has recently taken further steps towards listening to the views of the multitude by openly polling the public about what we would like to see in the next Register of Copyrights.

This is an exciting time to engage with the Copyright Office under Dr. Hayden’s leadership. I encourage everyone reading this essay to add your voice to the ongoing discussions about the changes to the Office, including the selection of the new Register of Copyrights and beyond.

(Baseball) Bibliometrics: The Final Inning


This post was guest authored by Scholarly Communication and Publishing Graduate Assistant Paige Kuester. This is the third part of a three-part series. Read Part 1 and Part 2.


We’re almost there! We’ve just got to go over some of the newest ways of measuring impact, and then we can all go home with that win.

Altmetrics

This is one of the newer types of bibliometrics. Instead of relying on citations alone to gain an understanding of an author’s work, this “alternative metric” takes into account the different ways people interact with articles, like bookmarking them, blogging about them, and tweeting about them, in addition to regular citations.

Different companies establish their scores in different ways; a few providers of this service are the company Altmetric, ImpactStory, and Plum Analytics.

Of course, there is criticism of this method as well, because it does not fall strictly within the scholarly realm, and some article topics are not inherently tweetable. The scores should not be used as the sole judge of the impact of an article because there are always various factors involved. Obviously, you can also game this metric and the h-index by talking about and citing your own articles, and getting your friends to do it, too. However, it is one of the new ways that these measures are trying to keep up with the technological times, as papers are more widely available online and so may have a wider audience impact than previously.

I suppose this is a bit like picking your favorite baseball player. While altmetrics are clearly less subjective than that, there are still similarities. Other bibliometric measures only look at how other scholars use the work, which would be like only allowing other players on your team to pick your favorite player. They probably have good judgement about the ability of the player, but they are not looking at outside factors, like how the player behaves on Twitter.

This points to another downside of altmetrics: a high score only indicates a high level of attention toward an article, and that attention could be negative press, just like players who get a lot of news coverage when they commit crimes. Nevertheless, altmetrics do not seem to be just a trend, because scholars do want to see the impact of their articles outside of strict academia; there may simply be some wrinkles left to iron out.

We have covered almost all of the bases now.

Now we enter the realm of proposed metrics.

S-index

The s-index, otherwise known as the self-citation index, is exactly what it sounds like. Flatt, Blasimme, and Vayena propose it in a 2017 paper, not to replace other indices, but to be reported alongside the h-index so that others can get an idea of whether an author’s h-index is self-inflated. The authors present evidence that for every self-citation, an article tends to receive an average of 2-3 more citations than it would have otherwise. Arguments against this are raised by Phil Davis of The Scholarly Kitchen. He specifically points out that not all self-citations are bad; some are necessary if the field is small and/or the author is building on their own previous research. It is not yet clear whether this will catch on.
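
The paper spells out the formal definition; as a rough sketch, and assuming the s-index mirrors the h-index but counts only an author's citations to their own work (my reading of the proposal, not a formula quoted from it), it could be computed like this:

```python
def s_index(self_citation_counts):
    """h-index-style score over self-citations only: the largest s such that
    s of the author's papers each have at least s self-citations (assumed form)."""
    ranked = sorted(self_citation_counts, reverse=True)
    s = 0
    for position, count in enumerate(ranked, start=1):
        if count >= position:
            s = position
    return s

# Made-up example: six papers with 5, 4, 3, 2, 1, and 0 self-citations
print(s_index([5, 4, 3, 2, 1, 0]))  # 3
```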

One way to look at this would be if a baseball player only hit the ball when other players were on base and about to score. It could be a specific strategy that the player only has a successful at-bat when they have a chance to improve their RBI. But who wouldn’t take that? It’s good for them, and it’s good for the other players who get to score, or at least advance toward home plate, because of that player. Or it could just be the situation that the player is in.

Ok, not the best example, but I promise that’s it. We made it to the end!

Not all of the metrics here are perfect. They analyze different aspects of an article’s or an author’s impact and so should be used in conjunction with each other. As technology opens new doors to sharing work and assessing impact, we will probably see new measures and new ways to measure old metrics. Just as in baseball, there is not one single number that can tell you everything you need to know, but things are improving and changing every day.

Now you know enough about the metrics of scholarly publishing to score a home run.

Or something like that.


Sources:

Allard, R. J. (2017). Measuring Your Research Impact. Uniformed Services University. Retrieved September 21, 2017, from http://usuhs.libguides.com/c.php?g=184957&p=2506307

Davis, P. (2017, September 13). Do We Need a Self-Citation Index? Scholarly Kitchen. Retrieved September 21, 2017, from https://scholarlykitchen.sspnet.org/2017/09/13/need-self-citation-index/.

De Groot, S. (2017). Measuring Your Impact: Impact Factor, Citation Analysis, and other Metrics. UIC University Library. Retrieved September 21, 2017, from http://researchguides.uic.edu/if/impact

Flatt, J. W., Blasimme, A., & Vayena, E. (2017). Improving the Measurement of Scientific Success by Reporting a Self-Citation Index. Publications 5(3). Retrieved September 21, 2017, from http://www.mdpi.com/2304-6775/5/3/20/htm

Garfield, E. (2006). The History and Meaning of the Journal Impact Factor. JAMA 295(1). Retrieved September 21, 2017, from http://garfield.library.upenn.edu/papers/jamajif2006.pdf

Reuter, A. (2017, September 11). Baseball Standings Explained. Livestrong.com. Retrieved September 21, 2017, from http://www.livestrong.com/article/337911-baseball-standings-explained/

(Baseball) Bibliometrics: Calculating the Scoreboard


This post was guest authored by Scholarly Communication and Publishing Graduate Assistant Paige Kuester. This is the second part of a three-part series. Read Part 1 here.


In our last post, we discussed what makes a journal the best team for a scholarly player (sort of). Today, we are looking at scores that are used to directly measure the impact of scholarly articles and the authors themselves.

H-index

Now this score is a bit trickier to calculate. But first, it’s probably best to explain what it is and what it does. The h-index focuses on a specific researcher’s own output, combining the number of papers they have published with the number of times those papers have been cited by others. Yeah, this is a curveball.

Now if we were really going to spend an afternoon at the ballpark learning about scholarly measurements, then we would go into the nitty-gritty of how to figure out the most cited papers, and also how to actually figure out an h-index. But in simple terms, you list the author’s publications in descending order of citation count. Next, you go down the list until a publication’s number of citations is no longer greater than or equal to its position in the list. The last position whose citation count is still greater than or equal to that position is the h-index. Check out this Waterloo library guide for an example.
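
If it helps to see that rule written out, here is a minimal sketch (the function name and the example citation counts are just made up for illustration):

```python
def h_index(citation_counts):
    """Return the h-index for a list of per-paper citation counts."""
    # Sort citation counts in descending order, then walk down the list,
    # keeping the last position whose paper has at least that many citations.
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for position, citations in enumerate(ranked, start=1):
        if citations >= position:
            h = position
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4, and 3 times
print(h_index([10, 8, 5, 4, 3]))  # 4
```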

Otherwise, you can also just look it up. The scores might vary between websites because of the differences in their content, but Google Scholar, Web of Science, and Scopus all give an h-index.

If none of this made sense, here’s a plug for the Wikipedia page that informed my basic understanding.

There is not a metric in baseball that’s like this. Maybe if our baseball team had a starting lineup where the players with the most home runs started and went down the order in descending number of home runs, but cut off when the lineup reached the last player whose home run total was greater than or equal to the position they were in? There is more strategy than that to batting order, so that is clearly not how it works, but you knew coming into this that this was going to be a stretched metaphor, anyway.

So what’s next?

G-index and i10-index

Neither of these indices is as widely used as the h-index.

The g-index is supposed to be an updated version of the h-index that places more value on highly cited articles.

The i10-index is used only by Google Scholar, and it can be remembered by its name: it is the number of an author’s articles that have 10 citations or more each.
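
For the curious, here is a small sketch of both. The i10-index follows directly from the description above; the g-index part uses the commonly published definition (the largest g such that the top g papers have at least g² citations in total), which is not spelled out in this post, so treat it as my gloss rather than something from the text. The citation counts are made up.

```python
def i10_index(citation_counts):
    """Number of papers with at least 10 citations each (Google Scholar's i10)."""
    return sum(1 for count in citation_counts if count >= 10)

def g_index(citation_counts):
    """Largest g such that the top g papers together have at least g**2 citations,
    per the commonly used definition of the g-index (my gloss, not from the post)."""
    ranked = sorted(citation_counts, reverse=True)
    running_total, g = 0, 0
    for position, citations in enumerate(ranked, start=1):
        running_total += citations
        if running_total >= position ** 2:
            g = position
    return g

citations = [40, 18, 12, 9, 3, 1]
print(i10_index(citations))  # 3
print(g_index(citations))    # 6
```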

Okay, I think we’ve lost focus on the game, but we will come back to it in the next post.

Don’t worry, we’re in the seventh inning stretch. The game is about to get a whole lot more exciting, but I promise we won’t go into extra innings.

(Baseball) Bibliometrics Broken Down: A Series

This post was guest authored by Scholarly Communication and Publishing Graduate Assistant Paige Kuester. This is the first part of a three-part series.


No matter what game, everyone wants to be the best. Play for the best team, have the highest score, whatever. The game of research is no different. Now, I don’t mean to suggest that research and publishing should not be taken seriously by calling it a game, but there are still high scores involved that may be the deciding factor in the end result, which could be tenure or a higher paycheck or just negotiating power. You have probably heard of some of these scores, like the h-index or altmetrics. Even if you know what they mean, you might not know their significance or how they are calculated. And if you do know all of that, your time might be better spent elsewhere, unless you enjoy a super-stretched sports metaphor.

Yes, to further extend this game metaphor, we’re going to spend an afternoon at the ballpark. I’m visualizing Wrigley, but we can go wherever your favorite team plays, as long as it’s a Major League team. I know that I might be losing you at this point, and I might get lost in this imperfect metaphor myself, but if we make it through, there’s sure to be a win at the end.

In this game, our scholarly authors (professors) are our players (professionals). This could be humorous, but don’t laugh yet, because these scholars are playing a serious game. Even though getting into the starting lineup does not guarantee a spot later in the season, I am going to equate that with gaining tenure for professors, as they are both goals that take hard work and dedication to achieve.

Journal Impact Factor

In order to have a good career, being on a highly ranked team is an automatic boost. They’re usually good for a reason, and fans will think that you must be good if you started off on such a prestigious team.

Picking a journal to publish in is a similar process, at least for the sake of this argument. While journals don’t go out and recruit, they are ranked in different ways, just like baseball teams. One way is through the journal impact factor, which ranks journals based on the average number of citations received by a typical article the journal published in the last two years.

The formula works like this: take the number of citations that articles the journal published during a two-year period received in the following year. Next, find out how many citable articles the journal published during that same two-year period. Divide the first number by the second, and you’ve got the journal impact factor. This formula is actually easier, in terms of math, than figuring out the top-ranked baseball teams, but if you are really up for a challenge, you can try that, too.
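
Here is that arithmetic as a tiny sketch, with purely made-up numbers for a hypothetical journal:

```python
def journal_impact_factor(citations_this_year, citable_items_prev_two_years):
    """Two-year impact factor: citations received this year to articles published
    in the previous two years, divided by the citable items from those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 600 citations in 2017 to its 2015-2016 articles,
# and 200 citable articles published in 2015-2016.
print(journal_impact_factor(600, 200))  # 3.0
```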

If you didn’t get that math, that’s just fine, because there are websites that do it for you. Journal Citation Reports puts out the scores every year, and, as in most sports, the higher the better.

Originally, the Impact Factor was not supposed to be used to judge how good an author or an article was, but this is one way that many judge those authors now. If you can play for a good team, if you can get your article published in a highly ranked journal, you must be good, right?

Well, not everyone thinks that this is a representative way to measure academic impact, so there are other specific measures for the players and their articles, which will be discussed in the next post. Don’t worry, we’re just getting started.

Open Access Button v. Unpaywall: Is there a Winner?

This post was guest authored by Scholarly Communication and Publishing Graduate Assistant Paige Kuester.


A few months back, the Commons Knowledge blog featured a post about a new tool from Impactstory called “Unpaywall.” Read that article here. This is still a relatively new tool that aims to find open access versions of articles when they are available. If the lock that shows up on an article’s page is green or gold, you can click on it and Unpaywall will take you to an OA version of that article. If only a grey lock shows up, then there is no OA version of that article that the tool can find.
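
For the programmatically inclined, Unpaywall also offers a public REST API alongside the extension. Below is a minimal sketch of querying it; the DOI and email address are placeholders, and the exact response fields are an assumption based on how the API behaved at the time of writing.

```python
import requests

def find_oa_copy(doi, email):
    """Ask the Unpaywall API whether an open access copy of a DOI is known."""
    response = requests.get(f"https://api.unpaywall.org/v2/{doi}",
                            params={"email": email})
    response.raise_for_status()
    record = response.json()
    # "is_oa" and "best_oa_location" are the fields the API used at time of writing.
    if record.get("is_oa") and record.get("best_oa_location"):
        return record["best_oa_location"].get("url")
    return None  # no open access version found

# Placeholder DOI and email, for illustration only.
print(find_oa_copy("10.1038/nature12373", "you@example.edu"))
```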

Similarly, the Open Access Button’s goal is to get you past paywalls. This is an older extension than Unpaywall, but is still being updated. This one works by bookmarking the button, and once you happen upon a paywalled article, you click on that bookmark. It also has a feature for when the article is not available: emailing the authors directly. The authors are then encouraged to deposit their articles in a repository, and either send a link to that or send the article directly to OAB so that they can upload it to a repository. Of course, if the author’s rights contract does not allow them to do this, then they can decline. OAB is also working with interlibrary loan departments in order to utilize this tool in those systems, which is supposed to eventually reduce the cost of sending articles between libraries.

I decided to test out the Open Access Button in order to write a fantastic blog post about it and how it compares to Unpaywall, and honestly, I came out a bit disappointed.

Maybe I just picked the wrong articles or topic to search for, or I’m just unskilled, but I had little success in my quest.

My first step was to install OAB, which was easy to do: I just dragged the button to my bookmarks for it to chill there until I needed it.

I used Google Scholar to search for an article that I did not have access to through the University. We do have a lot of articles available, but I did manage to pin one down that I could not get the full text for.

The Google Scholar results.

So I went to the page.

And opened my bookmarks to click on the Open Access Button.

A screenshot of the bookmark for Open Access Button.

And then it loaded. For quite a while.

A screenshot of the loading screen.

And then…

A screenshot of how to request an article.

The article wasn’t available. But it gave me the option to write a note to the author to request it, like I mentioned above. Awesome. I wrote my note, but when I went to send it off, I arrived at another page asking me to supply the author’s email and the DOI of the article.

Screenshot of the website asking for a DOI.

An unexpected twist.

Okay, fine. So I searched and I searched for the first author but to no avail. I did, however, find the second author’s email, so I put that in the box. Check.

Next, the DOI. I searched and I searched and I looked up how to find an article’s DOI. Well, my article was from 1992 so the reason I couldn’t find one was probably because it didn’t have one. There was no option for that, so what next?

I installed Unpaywall to see if I would have more success that way. First, I had to switch from Safari to Chrome because Unpaywall only works on a couple of browsers. It was also easy to install, but I could not get the lock to show up in any color on the page, something that has happened to me many times since.

I ended up interlibrary loaning that article.

Additional experiences include OAB saying that I had access to an article, but sending me to an institutional repository that only members of that school could access. Unpaywall was more truthful with this one, showing me a grey lock. For another article, OAB let me send a message to the author, and this time it had thankfully found the author’s email itself, but I never heard back. Unpaywall would not show me any type of lock for this one, not even grey.

Both of these applications are still rather new, and there are still barriers to open access that need to be overcome. I will continue to try these tools when I come across an article that I don’t have access to, because supporting open access is important, but honestly, interlibrary loan was much more helpful to me during this venture.

Where Does Sci-Hub Fit In?


This post was guest authored by Scholarly Communication and Publishing Graduate Assistant Paige Kuester.


Open access is not as simple as it may seem. In addition to conflicting definitions of open access itself, there are many different kinds, which may or may not follow the definitions previously put forth. There are three basic types that scholars discuss: gold, green, and hybrid, which are defined in this LibGuide.

There are also other colors that authors use to describe categories that do not fall under the three listed above, including but not limited to bronze, diamond/platinum, blue, white, and yellow.

And then there’s a whole category, black, just for Sci-Hub.

Okay, it’s not just for Sci-Hub, it also includes other platforms like ResearchGate and the like, where articles are freely shared by authors, but mainly, it’s for Sci-Hub.

Now keep in mind that most of these terms and definitions are up for debate, so take it all with a grain of salt.

The first question is: is Sci-Hub even open access?

If we are defining OA as freely available, then the answer would probably be “yes.” However, if we are defining OA as “legally” and freely available, then probably not. It does not follow licensing laws, it is often unavailable, and the content usually comes from subscribed entities, meaning that someone is still paying somewhere, according to this article by Angela Cochran of The Scholarly Kitchen.

Actually, the real first question is: What is Sci-Hub?

Sci-Hub is a website that was started in 2011 by Alexandra Elbakyan, then a Kazakhstani graduate student who was tired of facing paywalls for articles that she could not get access to (which is something we can all relate to, honestly). So she created a way around it with Sci-Hub, which grabs articles from behind institutional and publisher paywalls and makes them freely available. If it does not already have an article, it will retrieve it for you and make it accessible to others.

This, of course, has varying consequences across the board.

So who is it hurting?

Obviously, publishers like Elsevier don’t like it. They aren’t getting paid for the articles that they provide access to. In fact, they have already sued Elbakyan and won, which caused the website to shut down temporarily, until it popped back up under a different domain. This is an ongoing battle.

Even open access publishers may be harmed in the process, says Cochran again. Though open access articles are already openly available, open access platforms traditionally also inform readers of what they can do with the work, like reuse, revise, retain, remix, and redistribute it. This information is valuable to both the reader and the publisher: the reader knows the rights regarding the work, and the publisher does not have the work used unfairly. This is lost on Sci-Hub. Additionally, OA publishers lose income by not keeping people on their sites to buy other products or services, Sci-Hub hides the real costs of OA publishing, and it does not give researchers the full picture of the article, just the text itself, with no comments or retractions (or stated rights) attached.

Authors and researchers seem to be stuck in the middle. They cannot get an accurate picture of their article’s citation impact, because Sci-Hub does not provide download counts for authors, and most reputable citation indices would not factor Sci-Hub downloads into their calculations anyway. However, as many of the main users in the US appear to be around college campuses, in all likelihood there are researchers who are accessing articles this way, if for nothing else than convenience.

Similarly, students are still utilizing this site even if their institutions do have access to the articles. This is true even when the articles are open access, which makes it very clear that part of the appeal is convenience–not having to log in using credentials, for example.

What next?

This is the trickiest question of all.

There are a lot of opinions about Sci-Hub, but there are not many answers. If you are for open access, then the best way to reduce the threat of Sci-Hub against open access is to publish and access articles through those OA routes. The OA model can’t sustain itself if it does not have support. But if the knowledge needed is not accessible through OA means, then that is another question entirely. Librarians are torn on this issue, and time will tell how the publishers come out in this legally. However, it is very unlikely that Sci-Hub, or sites like it, will go away anytime soon.


Sources:

Björk, Bo-Christer. (2017, February 7). Gold, Green, and Black Access. Learned Publishing. Retrieved from http://onlinelibrary.wiley.com/doi/10.1002/leap.1096/full

Bohannon, John. (2016, April 28). Who’s Downloading Pirate Papers? Everyone. Science. Retrieved from http://www.sciencemag.org/news/2016/04/whos-downloading-pirated-papers-everyone

Cochran, Angela. (2017, June 6). Are Open Access Journals Immune from Piracy? The Scholarly Kitchen. Retrieved from https://scholarlykitchen.sspnet.org/2017/06/06/open-access-journals-immune-piracy/

Geffert, Bryn. (2016, September 4). Piracy Fills a Publishing Need. The Chronicle of Higher Education. Retrieved from http://www.chronicle.com/article/Piracy-Fills-a-Publishing-Need/237651

McKenzie, Lindsay. (2017, July 27). Sci-Hub’s Pirated Papers So Big, Subscription Journals Are Doomed, Data Analyst Suggests. Science. Retrieved from http://www.sciencemag.org/news/2017/07/sci-hub-s-cache-pirated-papers-so-big-subscription-journals-are-doomed-data-analyst

Ruff, Corinne. (2016, February 8). Librarians Find Themselves Caught Between Journal Pirates and Publishers. The Chronicle of Higher Education. Retrieved from http://www.chronicle.com/article/Librarians-Find-Themselves/235353

Waddell, Kaveh. (2016, February 9). The Research Pirates of the Dark Web. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2016/02/the-research-pirates-of-the-dark-web/461829/

Open Educational Resources: Who’s Paying?


This post was guest authored by Scholarly Communication and Publishing Graduate Assistant Paige Kuester.


Who wants free textbooks? If you’re a student, you probably just jumped out of your seat, depending on how much you have spent on books during your college career. According to an article in The Capital Times, one study has shown that the majority of students have not bought a textbook for a course because of its high price.

If you’re not a student, and especially if you’re a faculty member, you’re probably thinking, “What’s the catch?” You know that everything has its price, and in this case, you’re right.

So what are we talking about?

According to the article “Breaking free: To save students money, colleges are looking to the Open Educational Resources movement,” there is a trend in higher ed to provide open access resources to students instead of requiring traditional textbooks. Though the article notes that during 2015 and 2016 only 5.3 percent of courses across the country used open educational resources, this number is likely to increase in the coming years.

Open educational resources are just what they sound like: books and other items whose licensing makes them openly available online for educational purposes. Since these books and materials are open, they can be shared between different institutions and updated more easily than a physical textbook. They can also be reused, revised, and remixed with other material to suit a professor’s needs.

But someone has to pay, right?

Right. In the case of the University of Wisconsin-Madison, the school that is the focus of the article, the burden falls on the professors and instructors. Kristopher Olds, a professor of geography featured in the article, seized the opportunity to create an open textbook when it was presented to him, but he is paying for it by patching together small grants, sabbatical funds, and other resources, and volunteering some of his own time. He feels the effort is worth it, however, after realizing that his students were either not buying the expensive book he was assigning or were getting outdated information from older textbooks.

Surprisingly, Olds does not cite funding as one of the main barriers to institutions and professors implementing OER; instead, awareness of OER and of how to use them are the bigger problems. However, the landscape is changing as knowledge about this type of resource spreads.

Here at the University of Illinois, we are encouraging professors and instructors to look into this facet of teaching. The University has just joined the Open Textbook Network, but data has not yet been gathered about its implementation on campus. Over the next few years, the library will be putting out more initiatives for OER as part of joining the OTN. The Office of Information Literacy has put out a guide to help instructors understand what OER are, how to use them, and how to find resources. Learn more at the Open Educational Resources LibGuide.

Schneider, P. (2017, August 9). Breaking free: To save students money, colleges are looking to the Open Educational Resources movement. The Cap Times. Retrieved from http://host.madison.com/ct/news/local/education/university/breaking-free-to-save-students-money-colleges-are-looking-to/article_eebc0888-2f1f-5faf-ace3-6264b52b8512.html

Open Access Week at the University of Illinois Library

It’s that time of year again! Open Access Week is October 23-27, and the University of Illinois Library is excited to participate. Open Access Week is an international event where the academic and research community come together to learn about Open Access and to share that knowledge with others. This is the celebration’s eighth year, and the U of I Library has a great week of events planned!

  • Monday: Workshop: A Crash Course in Open Access, 12-1 PM, 314 Main Library
  • Tuesday: Workshop: Open Access Publishing and You, 12-1 PM, 314 Main Library
  • Wednesday: Workshop: Managing Your Copyright and Author’s Rights, 12-1 PM, 314 Main Library
  • Thursday: Scholarly Communication Interest Group Kickoff meeting, 12-1 PM, 106 Main Library
  • Friday: Workshop: Sharing Your Research with ORCiDs, DOIs, and open data repositories, 12-1 PM, 314 Main Library

For more information on open access, visit the Scholarly Communication and Publishing website.