Beginning again!

Hello students, faculty, and the amazing people of the University of Illinois at Urbana-Champaign! Your home for qualitative and quantitative research assistance, the Scholarly Commons, is re-opening with brand new hours!

That’s right, for the entirety of this beautiful fall semester we will be open from 8:30 am to 6 pm!

Will the Scholarly Commons still be hosting all its fantastic services this fall?

Why yes – yes it will!

The Scholarly Commons will be hosting:

Statistical Consulting:

Mondays: 10-4

Tuesdays: 10-4

Wednesdays: 10-1, 2-5

Thursdays: 10-4

Fridays: 10-4

The Survey Research Lab: Thursdays 1-4


And GIS Consultations:

Mondays 9-2

Tuesdays 12-4

Wednesdays 9-1

Thursdays 11-1


The Scholarly Commons is hosting a Data Visualization Competition!

Make your data something beautiful – and you could win big!

We’re also hosting an Open House on October 9th!

Stop by Main Library 220 from 4-5:30!


So much to see! So much to do!

We hope to see you all soon!

This Semester at the Scholarly Commons

The sun is shining, the birds are singing, and it’s a new semester at the University of Illinois at Urbana-Champaign. And with that new semester come all of the happenings at the Scholarly Commons. We have some great things coming up!

Hours

We’re back to our normal hours. Come visit us from 9 AM – 6 PM, Monday – Friday. We hope to see you soon!

Survey Research Lab

Survey Research Lab open hours are back! Walk-ins are accepted from 2 – 5 PM every Thursday, or you can make an advance appointment by emailing Linda Owens and Karen Retzer (please copy both addresses on your email).

During open hours, the Survey Research Lab can look at sampling, questionnaire design, and analysis. Come in with questions about the dos and don’ts of survey wording, recommendations for designing a sampling strategy, or advice on drafting a questionnaire!

CITL Statistical Consulting

Starting January 8th and running through the end of the semester, CITL graduate students will provide free statistical consulting in the Scholarly Commons. CITL consulting will be 11 AM – 4 PM every Monday – Friday in office 306H. Consultants work with SPSS, ATLAS.ti, Stata, R, and SAS. Make an appointment for your consultation by emailing citl-data@illinois.edu.

Savvy Researcher Workshops

Our Savvy Researcher Workshop calendar is finally up! New offerings this semester include A Crash Course in Open Access and Publishing Your Research in OA, Topic Modeling Theory and Practice, Building Your Research Profile and Network, Creating Digital Books with PressBooks, Do You Know Your Fair Use Rights?, Choosing the Right Sources: Identifying Bias and Fallacies, Basics of Data Visualization, and Add Captions to Kaltura video with Automatic Speech Recognition.

Staff

The staff here at the Scholarly Commons is always ready to welcome you! Our Scholarly Commons interns, Matt Pitchford and Clay Alsup, are back, as is Megan Ozeran, our data analytics and visualization resident librarian! You can request a consultation with them or any other staff member on our Contact an Expert page.

Hope to see you soon!

Announcing Topic Modeling – Theory & Practice Workshops

An example of text from a topic modeling project.

We’re happy to announce that Scholarly Commons intern Matt Pitchford is teaching a series of two Savvy Researcher Workshops on topic modeling. You may be following Matt’s posts on Studying Rhetorical Responses to Terrorism on Twitter or Preparing Your Data for Topic Modeling on Commons Knowledge, and now is your chance to learn the basics from the master! The workshops will be held on Wednesday, December 6th and Friday, December 8th. See below for more details!

Topic Modeling, Part 1: Theory

  • Wednesday, December 6th, 11am-12pm
  • 314 Main Library
  • Topic models are a computational method of identifying and grouping interrelated words in any set of texts. In this workshop we will focus on how topic models work, what kinds of academic questions topic models can help answer, what they allow researchers to see, and what they can obfuscate. This will be a conversation about topic models as a tool and method for digital humanities research. In part 2, we will actually construct some topic models using MALLET.
  • To sign up for the class, see the Savvy Researcher calendar

Topic Modeling, Part 2: Practice

  • Friday, December 8th, 11am-12pm
  • 314 Main Library
  • In this workshop, we will use MALLET, a Java-based package, to construct and analyze a topic model. Topic models are a computational method of identifying and grouping interrelated words in any set of texts. This workshop will focus on how to correctly set up the code, understand the output of the model, and refine the code for best results. No experience necessary. You do not need to have attended Part 1 in order to attend this workshop.
  • To sign up for this class, see the Savvy Researcher calendar

Preparing Your Data for Topic Modeling

In keeping with my series of blog posts on my research project, this post is about how to prepare your data for input into a topic modeling package. I used Twitter data in my project, which is relatively sparse at only 140 characters per tweet, but the principles can be applied to any document or set of documents that you want to analyze.

Topic Models:

Topic models work by identifying and grouping words that co-occur into “topics.” As David Blei writes, Latent Dirichlet allocation (LDA) topic modeling makes two fundamental assumptions: “(1) There are a fixed number of patterns of word use, groups of terms that tend to occur together in documents. Call them topics. (2) Each document in the corpus exhibits the topics to varying degree. For example, suppose two of the topics are politics and film. LDA will represent a book like James E. Combs and Sara T. Combs’ Film Propaganda and American Politics: An Analysis and Filmography as partly about politics and partly about film.”

Topic models do not have any actual semantic knowledge of the words, and so do not “read” the sentences. Instead, topic models use math: the tokens/words that tend to co-occur are statistically likely to be related to one another. However, that also means the model is susceptible to “noise,” or falsely identifying patterns of co-occurrence when unimportant but highly repeated terms appear. As with most computational methods: “garbage in, garbage out.”
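To make this concrete, here is a minimal sketch of fitting a topic model in Python with the gensim library (gensim is an assumption for illustration; my own project uses MALLET, discussed below). The toy corpus and the number of topics are made up:

    from gensim import corpora, models

    # Three toy "documents"; in practice each tweet would be one document.
    docs = [
        "gun control debate in congress",
        "new film about american politics",
        "congress votes on gun control legislation",
    ]
    tokens = [d.split() for d in docs]

    dictionary = corpora.Dictionary(tokens)           # map each token to an integer id
    corpus = [dictionary.doc2bow(t) for t in tokens]  # bag-of-words counts per document

    # Fit a two-topic LDA model and print the top words in each topic.
    lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
    for topic_id, words in lda.print_topics():
        print(topic_id, words)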

In order to make sure that the topic model is identifying interesting or important patterns instead of noise, I had to accomplish the following pre-processing or “cleaning” steps.

  • First, I removed the punctuation marks, like “,.;:?!”. Without this step, commas started showing up in all of my results. Since they didn’t add to the meaning of the text, they were not necessary to analyze.
  • Second, I removed the stop-words, like “I,” “and,” and “the,” because those words are so common in any English sentence that they tend to be over-represented in the results. Many of my tweets were emotional responses, so many authors wrote in the first person, which tended to skew my results. Be careful about which stop-words you remove, though: simply removing them without checking them first means you can accidentally filter out important data.
  • Finally, I removed overly common words that were unique to my data. For example, many of my tweets were retweets and therefore contained the word “rt.” I also removed mentions of other authors, because highly retweeted texts meant I was getting Twitter user handles as significant words in my results. (All three steps are sketched in code after this list.)
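Put together, the three steps look roughly like the following Python sketch (the stop-word list is a tiny stand-in for NLTK’s, and clean_tweet is a hypothetical helper):

    import re

    STOP_WORDS = {"i", "and", "the", "is", "a", "how", "about", "we", "just", "with"}
    CUSTOM_STOP_WORDS = {"rt"}  # terms unique to my data, like the retweet marker

    def clean_tweet(text):
        text = re.sub(r"[^\w\s]", "", text.lower())                # 1. strip punctuation
        tokens = [t for t in text.split() if t not in STOP_WORDS]  # 2. drop stop-words
        # 3. drop data-specific terms
        return " ".join(t for t in tokens if t not in CUSTOM_STOP_WORDS)

    print(clean_tweet("RT since sensible, national gun control is a steep climb!"))
    # since sensible national gun control steep climb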

Cleaning the Data:

My original data set was 10 Excel files of 10,000 tweets each. In order to clean and standardize all these data points, as well as combine my files into one single document, I used OpenRefine. OpenRefine is a powerful tool, and it makes it easy to work with all your data at once, even if it is a large number of entries. I uploaded all of my datasets, then performed some quick cleaning available under the “Common Transformations” option under the triangle dropdown at the head of each column: I changed everything to lowercase, unescaped HTML characters (to make sure that I didn’t get errors when trying to run it in Python), and removed extra white spaces between words.

OpenRefine also lets you use regular expressions, which is a kind of search tool for finding specific strings of characters inside other text. This allowed me to remove punctuation, hashtags, and author mentions by running a find and replace command.

  • Remove punctuation: grel:value.replace(/(\p{P}(?<!’)(?<!-))/, "")
    • Any punctuation character is removed, except apostrophes and hyphens.
  • Remove users: grel:value.replace(/(@\S*)/, "")
    • Any string that begins with an @ is removed. It ends at the space following the word.
  • Remove hashtags: grel:value.replace(/(#\S*)/, "")
    • Any string that begins with a # is removed. It ends at the space following the word.

Regular expressions, commonly abbreviated as “regex,” can take a little getting used to. Fortunately, OpenRefine itself has some solid documentation on the subject, and I also found this cheatsheet valuable as I was trying to get it to work. If you want to create your own regex search strings, regex101.com has a tool that lets you test your expression before you actually deploy it in OpenRefine.
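If you would rather do the same cleaning in Python than in OpenRefine, rough equivalents of the three GREL commands can be written with the built-in re module (a sketch; note that Python’s re does not support the \p{P} punctuation class, so an explicit character class stands in for it):

    import re

    tweet = "rt @drlawyercop since sensible, national gun control is a steep climb, how about we just start with orlando? #guncontrolnow"

    tweet = re.sub(r"@\S*", "", tweet)       # remove user mentions
    tweet = re.sub(r"#\S*", "", tweet)       # remove hashtags
    tweet = re.sub(r"[^\w\s'-]", "", tweet)  # remove punctuation, keeping apostrophes and hyphens
    print(tweet)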

After downloading the entire data set as a Comma Separated Value (.csv) file, I then used the Natural Language ToolKit (NLTK) for Python to remove stop-words. The code itself can be found here, but I first saved the content of the tweets as a single text file, and then I told NLTK to go over every line of the document and remove words that are in its common stop word dictionary. The output is then saved in another text file, which is ready to be fed into a topic modeling package, such as MALLET.
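The core of that NLTK step looks something like this sketch (the file names are placeholders):

    import nltk
    from nltk.corpus import stopwords

    nltk.download("stopwords")  # one-time download of NLTK's stop-word lists
    stop_words = set(stopwords.words("english"))

    with open("tweets.txt") as infile, open("tweets_clean.txt", "w") as outfile:
        for line in infile:
            kept = [w for w in line.split() if w not in stop_words]
            outfile.write(" ".join(kept) + "\n")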

At the end of all these cleaning steps, my resulting data is essentially composed of unique nouns and verbs, so, for example, @Phoenix_Rises13’s tweet “rt @drlawyercop since sensible, national gun control is a steep climb, how about we just start with orlando? #guncontrolnow” becomes instead “since sensible national gun control steep climb start orlando.” This means that the topic modeling will be more focused on the particular words present in each tweet, rather than commonalities of the English language.

Now my data is cleaned from any additional noise, and it is ready to be input into a topic modeling program.

Interested in working with topic models? There are two Savvy Researcher topic modeling workshops, on December 6 and December 8, that focus on the theory and practice of using topic models to answer questions in the humanities. I hope to see you there!

This Semester at the Scholarly Commons

The sun is shining, the birds are singing, and it’s a new semester at the University of Illinois at Urbana-Champaign. And with that new semester come all of the happenings at the Scholarly Commons. We have some great things coming up!

Hours

We’re back to our normal hours. Come visit us from 9 AM – 6 PM, Monday – Friday. We hope to see you soon!

Survey Research Lab

Survey Research Lab open hours are back! Walk-ins are accepted from 2 – 5 PM every Thursday, or you can make an advance appointment by emailing Linda Owens, Sowmya Anand, and Karen Retzer (please copy all addresses on your email).

During open hours, the Survey Research Lab can look at sampling, questionnaire design, and analysis. Come in with questions about the dos and don’ts of survey wording, recommendations for designing a sampling strategy, or advice on drafting a questionnaire!

CITL Statistical Consulting

Starting August 28th and running through the end of the semester, CITL graduate students will provide free statistical consulting in the Scholarly Commons. CITL consulting will be 11 AM – 4 PM every Monday – Friday. Consultants work with SPSS, ATLAS.ti, Stata, R, and SAS. The consultants may take walk-ins, but you can also email statconsulting@illinois.edu for an appointment.

Savvy Researcher Workshops

Our Savvy Researcher Workshop calendar is finally up! New offerings this semester include Understanding Bias: Evaluating News & Scholarly Sources, Copyright for Educators, Conducting Research with Primary Sources and Digital Tools, Managing Your Copyrights, Finding Data about Residential Real Estate, and more. Of course, old favorites will be offered, as well!

Staff

We have some new and returning staff members at the Scholarly Commons! Digital Scholarship Liaison and Instruction Librarian Merinda Hensley is back from sabbatical, and Carissa Phillips is now the Data Discovery and Business Librarian. We’re also welcoming Data Analytics and Visualization Resident Librarian Megan Ozeran, as well as Scholarly Commons Interns Clay Alsup and Matt Pitchford, and Graduate Assistants Billy Tringali and Joe Porto. Stop in and say hello!

What To Do When OCR Software Doesn’t Seem To Be Working

Optical character recognition can enhance your research!

While optical character recognition (OCR) is a powerful tool, it’s not a perfect one. Inputting a document into OCR software doesn’t necessarily mean that the software will actually output something useful 100% of the time. Though most documents come out without a hitch, we have a few tips on what to do if your document just isn’t coming out right.

Scanning Issues

The problem may be less with your program and more with your initial scan. Low-quality scans are less likely to be read by OCR software. Here are a few considerations to keep in mind when scanning a document you will be using OCR on:

  • Make sure your document is scanned at 300 DPI
  • Keep your brightness level at 50%
  • Try to keep your scan as straight as possible

If you’re working with a document that you cannot rescan, there’s still hope! OCR engines with a GUI tend to have photo editing tools built in. If your OCR software doesn’t have those tools, or if the provided tools aren’t cutting it, try using a photo manipulation tool such as Photoshop or GIMP to edit your document. Also, remember that OCR software tends to be less effective on photographs than on scans.
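If you would rather script these touch-ups than make them by hand, here is a minimal sketch using the Python imaging library Pillow (an assumption; any photo editor can do the same). It converts a scan to grayscale, boosts the contrast, and binarizes it to crisp black-on-white; the file names and threshold are placeholders:

    from PIL import Image, ImageEnhance, ImageOps

    img = Image.open("scan.png")                      # placeholder input file
    img = ImageOps.grayscale(img)                     # drop color information
    img = ImageEnhance.Contrast(img).enhance(2.0)     # boost low contrast
    img = img.point(lambda p: 255 if p > 140 else 0)  # binarize: black text, white page
    img.save("scan_cleaned.png")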

Textual Issues

The issues you’re having may not stem from the scanning, but from the text itself. These issues can be more difficult to solve, because you cannot change the content of the original document, but they’re still good tips to know, especially when diagnosing issues with OCR.

  • Make sure that your document is in a language, and from a period, that your OCR software recognizes; not all engines are trained to recognize all languages
  • Low contrast in documents can reduce OCR accuracy; contrast can be adjusted in a photo manipulation tool
  • Text created prior to 1850 or with a typewriter can be more difficult for OCR software to read
  • OCR software cannot read handwriting; while we’d all like to digitize our handwritten notes, OCR software just isn’t there yet

Working with Digital Files

Digital files can, in many ways, be more complicated to run OCR software on, simply because someone else may have made the file. That can mean the file is lower quality to begin with, or that whoever scanned it made errors. Most likely, you will run into scenarios that are easy fixes with photo manipulation tools. But there will be times when the images you come across just won’t work. It’s frustrating, but you’re not alone. Check out your options!

Always Remember that OCR is Imperfect

Even with perfect documents that you think will yield perfect results, there will be a certain percentage of mistakes. Most OCR software packages have an accuracy rate between 97% and 99% per character. While this may not seem like many errors, on a page with 1,800 characters there will be between 18 and 54 of them. In a 300-page book with 1,800 characters per page, that’s between 5,400 and 16,200 errors. So always be diligent and clean up your OCR!
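You can check that arithmetic yourself; a couple of lines of Python reproduce it:

    chars = 1800 * 300  # characters per page times pages in the book
    for accuracy in (0.99, 0.97):
        print(f"{accuracy:.0%} accurate: {chars * (1 - accuracy):,.0f} errors")
    # 99% accurate: 5,400 errors
    # 97% accurate: 16,200 errors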

The Scholarly Commons

Here at the Scholarly Commons, we have Adobe Acrobat Pro installed on every computer, and ABBYY FineReader installed on several. We can also help you set up Tesseract on your own computer. If you would like to learn more about OCR, check out our LibGuide and keep your eye open for our next Making Scanned Text Machine Readable through Optical Character Recognition Savvy Researcher workshop!
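And if you want a taste of OCR right away, here is a minimal sketch that runs Tesseract from Python via the pytesseract wrapper (the wrapper and file name are assumptions for illustration; Acrobat Pro and ABBYY FineReader are point-and-click):

    from PIL import Image
    import pytesseract

    # Recognize the text in a cleaned-up scan and print it.
    text = pytesseract.image_to_string(Image.open("scan_cleaned.png"))
    print(text)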

Writing the next great American novel, or realistically, finding the “write” tools to finish your thesis

The Scholarly Commons is a great place to write the next great American novel; in fact, I’m surprised it has not happened yet (no pressure, dear patrons — we understand that you have a lot on your plates). We’re open Monday-Friday from 9-6 and offer a well-lit, fairly quiet, and overall ideal working space, with Espresso Royale and the Writing Center nearby. But actually getting that writing done, that’s the real challenge. Luckily, we have suggestions for tools and software you can use to keep writing and stay on track this semester!

Writing Your First Draft:

Yes, MS Word can be accessed for free by University students through the Web Store, and you can set it up to better address your research needs with features like the Zotero and Mendeley plugins to incorporate your references. And don’t forget you can go to Word > File > Options > Proofing > Writing Style, select Grammar and Style, and open Settings to choose what Spellcheck checks for, so that passive voice gets underlined. However, believe it or not, there are word processors other than MS Word that are better for organizing and creating large writing projects, such as novels, theses, or even plays!

Scrivener

Scrivener is a word processor created with novelists in mind that lets you organize your research and notes while you are writing. With an education discount, a license for Scrivener costs $38.25. Scrivener is very popular and highly recommended by two of the GAs here at the Scholarly Commons (you can email Claire Berman with any questions you may have about the software at cberman2 [at] illinois.edu). To really get started, check out our online copies of Scrivener: An Absolute Beginner’s Guide and Scrivener for Dummies!

Mellel

Unfortunately, Mellel is only available on Mac. An educational license for the software costs $29. To some extent Mellel is similar in style and price to Pages for Mac, but also shares similarities with MS Word for Mac. However, this word processor offers more options for customizing your word processing experience than Pages or MS Word. It also provides more options for outlining your work and dividing sections in a way that even MS Word Notebook version does not, which is great if you have a large written work with many sections, such as a novel or a thesis! Mellel also partners with the citation managers Bookends and Sente.

Markdown Editors like Ulysses

Ulysses is a simple and straightforward word processor for Mac, but you do have to write in Markdown without a WYSIWYG editor. It costs $44.99 for Mac and $24.99 for iOS. However, it has many great features for writers, such as built-in word count goals for sections of a paper, and Markdown makes outlining your work easy and simple. We have discussed the value and importance of Markdown elsewhere on the blog before, specifically in our posts Digital Preservation and the Power of Markdown and Getting Started with Markdown, and of course, want to remind all of our lovely readers to consider doing their writing in Markdown. Learning Markdown can open up writing and digital publishing opportunities across the web (for example: Programming Historian tutorials are written in Markdown). Plus, writing in Markdown converts easily for simple web design without the headache of having to write in HTML.
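As a quick illustration of that last point, converting Markdown to HTML is a one-liner with, for example, the third-party Python markdown package (one assumed choice among many; Pandoc and most static site generators do the same):

    import markdown

    html = markdown.markdown("# My Thesis\n\nChapter one begins *here*.")
    print(html)
    # <h1>My Thesis</h1>
    # <p>Chapter one begins <em>here</em>.</p>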

Staying Focused:

Maybe you don’t want to buy a whole new word processor. That’s fine! Here are some tools that can help create the “write” environment to get work done:

Freedom: costs $2.50 a month, so Freedom is not free, indeed. This is an app, available for Mac, Windows, and iOS devices, that allows you to block websites and even the internet entirely. The app also has a lock feature that will not allow you to make changes to what is blocked for a set period of time.

RescueTime: another app option. Taking a slightly different approach from the rest here, the lite version of this app helps you track how you use your time and which apps and websites you use the most, so that you can have a better sense of what you are doing instead of writing. The premium version, which costs $54 a year, allows you to block distracting websites.

SelfControl: a Mac option, but open source, with community-built Linux and PC versions, and most importantly it’s free! This app allows you to block websites, based on their server, for a set period of time, during which there is basically NOTHING you can do on your computer to access those sites. So choose which sites to block, and the time limit, wisely.

Editing Tools:

Hemingway

Named after Ernest Hemingway, this text editor is supposed to help you adopt his style of writing: “bold and clear.” When you paste your text into the free web version, the applet gives you the text’s reading level and points out instances of awkward grammar, unnecessary or complicated words and adverbs, and sentences that are too long or too complicated. There’s a desktop version available for $20, though I honestly don’t think it’s worth the money; it does give you another simple space on your computer to write and get feedback.

A note about Grammarly 

This is an alternative to MS Word spell check, with a free version you can add to your browser. As a browser add-in, it automatically checks for critical spelling and grammar mistakes (advanced checks cost a monthly fee) everywhere you write, except in some of the places you’d really want extra spell check, such as Google Docs, and it can be wonky with WordPress. You can always copy and paste into the Grammarly window, but at that point you’re probably better off doing spell check in MS Word. There are also only two versions of English available, American and British (take that, Australia!). If you are trying to learn English and want instantaneous feedback while writing on the internet, are studying for high school standardized tests, or are perhaps a frequent YouTube commenter in need of a quick check before posting, then Grammarly is for you. For most people at the Scholarly Commons, this is a plugin they can skip, though I can’t speak for the paid version, which is supposed to be a little bit better. And heads up: if you uninstall the app, they try to guilt trip you.

SpellCheckPlus: It’s BonPatron in English! Brought to you by Nadaclair Language Technologies, this web-based text editor goes beyond MS Word’s spellcheck to identify grammar errors and ways to make your writing sound more natural to a native (Canadian) English speaker. There is a paid version, but if you don’t import more than the allotted 250 words of text at one time, you will be fine using the free version.

Let us know what you think and any tools we may have missed! Happy writing!

And to learn more and find more great productivity tools, check out:

Personal Information Management LibGuide

Announcing the Scholarly Commons Internship

If you’re a University of Illinois at Urbana-Champaign doctoral student in the humanities or social sciences, we have a fantastic opportunity opening up for the 2017-2018 academic year! The Scholarly Commons Internship is a graduate hourly internship open to doctoral students who have completed coursework. Here is the full listing:

POSITION DESCRIPTION

The Scholarly Commons Internship Program is offering up to two one-year graduate hourly internships for the 2017-2018 academic year for University of Illinois at Urbana-Champaign doctoral students in the humanities and social sciences. Students currently in the 3rd year or beyond of their doctoral program who have completed required coursework are eligible to apply. One intern will focus on the digital humanities; the second will focus on the digital humanities or computational social sciences.


Scholarly Commons Interns will be expected to be in residence approximately ten hours per week. The majority of their time will be allocated to the pursuit of research projects, proposed by the intern, that draw upon Scholarly Commons and University Library resources and materials. These projects may intersect with the intern’s dissertation research. Interns will also contribute a select amount of time to the Scholarly Commons’ training and research support services in the digital scholarship topics and skill areas in which they have expertise. Example contributions could include teaching a Savvy Researcher workshop, holding scheduled open hours in the Scholarly Commons, or consulting with researchers by appointment.


Graduate interns will also be assigned one or more “library mentors” from the Scholarly Commons and University Library. Interns may work with Library staff outside the Scholarly Commons depending on the projects they pursue during the internship period.


CRITERIA:

  • Open to doctoral students who have completed all coursework
  • Expertise or strongly demonstrated interest in research in the humanities or social sciences
  • Demonstrated interest or experience in digital scholarship
  • Demonstrated skill(s) in computational tools and approaches related to Scholarly Commons resources and services (e.g., statistical analysis in SAS, GIS, Python programming, data visualization, text mining)

APPLICATION MATERIALS:

  • Letter of application
  • Statement of your current or proposed research in digital scholarship and how it will use University of Illinois Library and/or Scholarly Commons resources (one page max)
  • Current curriculum vitae

The internship will pay an hourly rate of $20.98 for approximately 10 hours per week during the fall 2017 and spring 2018 semesters.

Submit application materials by May 30, 2017 to Emilie Staubs (estaubs [at] illinois [dot] edu).

Please do not hesitate to contact us with any questions you may have about the position or how to apply. Good luck, and we look forward to hearing from you!

Spotlight: JSTOR Labs Text Analyzer

JSTOR Labs has recently rolled out a beta version of the JSTOR Text Analyzer. The purpose of the Text Analyzer is different from that of other text analyzers (such as Voyant). The JSTOR Text Analyzer mines documents you drop into its easy-to-use interface, breaks them down into topics and terms, and then searches JSTOR with those terms. The result? A list of JSTOR articles that relate to your research topic and help fill your bibliography.

So, how does it work?

You simply drag and drop a file (their demo file is an article named “Retelling the American West in the Museum”), copy and paste text, or select a file from your computer and input it into the interface. What you drag and drop does not necessarily have to be an academic article. In fact, after inputting a relatively benign image for this blog, the Text Analyzer gave me remarkably useful results relating to blogging and learning, the digital humanities, and libraries.

Results from the Commons Knowledge blog image.

After you drop your file into JSTOR, your analysis is broken down into terms. These terms are further broken down into topics, people, locations, and organizations. JSTOR decides which terms it believes are the most important and prioritizes them, even giving specific weight to the most important terms. However, you can customize all of these options by choosing words from the identified terms to become prioritized terms, adding or deleting prioritized terms, and changing the weight of prioritized terms. For example, here are the automatic terms and results from the demo article:

The automatic terms and results from the demo article.

However, I’m going to remove the article’s author from the prioritized terms, add Native Americans and Brazilian art to the prioritized terms, and change the weight of these terms so that the latter two are the most important. This is how my terms and results list looks now:

The new terms and results list.

As you can see, the results completely changed!

While the JSTOR Text Analyzer doesn’t necessarily function in ways similar to other text analyzers, its ability to find key terms will help you not only find articles on JSTOR, but use those terms in other databases. Further, it can help you think strategically about search strategies on JSTOR, and see which search terms yield (perhaps unexpectedly) the most useful results for you. So while the JSTOR Text Analyzer is still in beta, it has the potential to be an incredibly useful tool for researchers, and we’re excited to see where it goes from here!

Creating a Professional Website on Weebly

One important step toward a good online scholarly presence is having your own professional website. Weebly is a fairly easy-to-use, free content management system that you can use to create a customized page for yourself or a team. It is one of the website builders supported by the iSchool, which you can find more information about at this link.

How to Create a Weebly Website

Step 1) Log in to Weebly, either by creating a new Weebly account or by linking to a Facebook or Gmail account. If this is a professional website, think very carefully about whether or not you want it in any way connected to your Facebook (after all, future employers don’t care how much fun you had during spring break, nor do they want to see those conspiracy theory articles your uncle keeps sharing).

Step 2) Choose a theme! There used to be a lot more themes available on Weebly, but those days are over. You have a couple of options for very basic themes that, with the addition of some images, will help you instantly create a classy portfolio page. Any theme is fair game, though the ones under “Portfolio,” “Personal,” and “Blog” are best suited to a professional website.

Step 3) You will be prompted to choose your web domain. You can have a free subdomain ending in .weebly.com. Try to get some variation on your name as your website address; you can also use the name of your company or organization.

Image: the sign-up screen for “mothersagainstbearattacks”.

Note: as of writing (23 Jan. 2017) this Weebly domain name is still available!

However, if this is too much pressure for now, you can start creating your site and won’t have to really settle on a name until you publish the site.

If inspiration strikes before then, go to Settings > Site Address.

Step 4) Adding pages. Whichever theme you choose likely comes with Home, Blog, and Contact pages; click on their different elements to edit them. To add a new page, or a certain type of page, use the Pages tab at the top of the editor.

Step 5) Customizing pages. For simple edits, simply click on what you want to replace and add new content. To add new content, there’s a sidebar full of options (even more if you add apps to your site or pay for Weebly). Simply drag and drop to arrange the content types on your site.

As an example, we’ll create a contact page.

If you want people to reach out to you, it’s great to have a page where they can do that. We do not recommend writing your email address out on pages, because that’s a good way for spambots to find you. However, Weebly makes it easy to add a contact form: simply click “Build” and drag and drop the contact form onto the page.

Editing contact information for Weebly contact form

If you have a physical location where you tend to be such as an office (lucky you!) or a coffee shop that pretty much is your office, then you can add a Google map as well to show people what building it is in. Though if your office is in the Armory (or certain parts of Main Library for that matter) you should probably include more specific instructions so that people don’t spend years trying to find it. You can also link your LinkedIn profile to your site by dragging and dropping that icon as well.

And don’t forget to include your ORCID (if you haven’t created one yet, we suggest you check out this ORCID information)!

Special note: Adding stock images

Of course, your professional website should include at least one picture of you in a professional setting. Weebly has a number of stock images you can choose from that can look very nice. But what if you want something a bit more customized? For your professional website, make sure that you have proper permissions for any images that you use! Copyright infringement is very unprofessional. To learn more about finding copyright friendly stock images check out the Finding and Using Images LibGuide. And please feel free to take a look at our Scholarly Commons copyright resources. For more specific questions, you can reach out to Assistant Professor & Copyright Librarian Sara Benson.

Further Resources:

Still confused about Online Scholarly Presence? We have not one but TWO LibGuides to help you understand: Online Scholarly Presence Seminar and Create & Manage an Online Scholarly Presence.

Here is a video from a few years ago, hosted by the University of Illinois, that explains more about creating a professional website in general.