Perhaps the reality of inhabiting a structure whose assembly requires "minimal formal skill or training" would be less than ideal. Nonetheless, the WikiHouse project is one of my favorite examples of something made available under a Creative Commons license. Part of why I find this project so intriguing is its potential as a unique entry point for talking to people about open access and Creative Commons licensing. The ubiquity of makerspaces is proof that people love this kind of stuff. Imagine teaching a classroom full of students about open-access publications they can use for their research and digital media they are free to use in their projects, all while they sit on open-source stools. This scenario could demonstrate to students, in a very tangible way, the power of creating something and sharing it openly under a Creative Commons license.
My last post examined a tool for exploring current Census data and exporting it in an easy-to-use format. Now what about historical Census data? Not the data from a few decades ago – we're talking about the really old stuff. Finding this type of historical Census data is notoriously difficult, more so than finding new data. Sifting through the digitized Decennial Censuses is overwhelming for the average library user. Proprietary services that offer value-added access to some historical Census data, such as GeoLytics, are typically expensive and not always chronologically comprehensive. Fortunately for us, as is often the case, libraries fill the void between the unpolished raw data and the proprietary systems that add costly value to it.
Finding government information can be challenging, even for those of us practiced in the task. Uncovering government data in a form that is easily usable can be even more difficult, graying the hair of many a social scientist.
Investigative Reporters & Editors has built an interface (census.ire.org) that facilitates locating and downloading data from the U.S. Census. Along with connecting users to Census data, the site provides concise descriptions of the geographical units by which the Census is measured. The project is supported by the Donald W. Reynolds Journalism Institute at the University of Missouri School of Journalism.
On Tuesday, September 9th I will be teaching a workshop on data visualization for the IUPUI Arts & Humanities Institute, “Introduction to Data Visualization I: Visualization with Gephi.” For the uninitiated, Gephi is an open-source network visualization program. The tool is ideal for networks of any size. It offers a vast array of network analysis and visualization options, including geospatial layouts for data, statistical measures for social network analysis, and dynamic network visualization. Gephi handles a variety of data formats and allows the construction of datasets within the tool itself, perfect for those working with smaller amounts of data. Gephi runs on Windows, Mac, and Linux operating systems.
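As a quick illustration of the kind of data Gephi accepts (this is my sketch, not part of the workshop materials): one of the simplest formats is an edge-list CSV with `Source` and `Target` column headers, which Gephi's spreadsheet importer recognizes. The author names here are hypothetical.

```python
import csv

# Hypothetical co-authorship pairs; any iterable of (source, target)
# tuples would work the same way.
edges = [
    ("Author A", "Author B"),
    ("Author B", "Author C"),
    ("Author A", "Author C"),
]

def write_edge_list(path, edges):
    """Write an edge-list CSV that Gephi's spreadsheet importer can open."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Source", "Target"])  # headers Gephi looks for
        writer.writerows(edges)

write_edge_list("coauthors.csv", edges)
```

Opening the resulting file in Gephi (File > Import Spreadsheet) yields a three-node network ready for the layout and analysis options described above.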
Aside from preparing for the onslaught of instruction that fall semester will bring, my time lately has been spent exploring topic modeling (I realize that I am somewhat late to the game on this, but it has been on my 'to do' list for a while now). After installing MALLET, a Java-based natural language processing package that facilitates topic modeling among other things, reading this helpful tutorial, and seeing evidence of topic modeling's utility for analyzing large volumes of text, I am intrigued but also somewhat overwhelmed. The further I move away from introductory explanations of topic modeling, like David M.
Do a Google image search for data visualization and undoubtedly you will see many examples of networks, otherwise known as graphs. The identification and study of these networks is useful in a variety of fields from social network analysis in sociology and social informatics to the study of predation networks in ecology. If you can identify connections between groups of entities, then you can study it using some aspect of network theory. However, the visual representations of these networks as graphs are often difficult to interpret. This post intends to shed some light onto the topic of network visualizations.
Essentially, networks are data structures that represent relationships between entities. For example, Author A writes an article with Author B. In this case the authors are the entities, and they are connected through their co-authoring relationship. Graphs consist of nodes (entities) and edges (relationships that connect the entities). We might visually represent the previous example as two nodes, Author A and Author B, joined by a single edge.
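The same node-and-edge structure can be sketched in plain Python. This is a minimal illustration (the author names are hypothetical), using an adjacency set: each node maps to the set of nodes it shares an edge with.

```python
from collections import defaultdict

# Undirected co-authorship graph: nodes are authors, edges are
# "co-wrote an article" relationships.
graph = defaultdict(set)

def add_edge(a, b):
    """Record an undirected edge between entities a and b."""
    graph[a].add(b)
    graph[b].add(a)

add_edge("Author A", "Author B")

# The dict keys are the nodes; an author's neighbors are their co-authors.
print(sorted(graph["Author A"]))  # → ['Author B']
```

Tools like Gephi build on exactly this abstraction, layering layout algorithms and network statistics on top of the node and edge lists.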
I recently attended the Federal Depository Library Conference in Washington D.C. Among the many interesting topics discussed, one in particular caught my attention and got me thinking about the ways my duties as a documents librarian and as a member of our Digital Scholarship Team overlap: promoting access to and preserving born-digital government information.
Over the past decade the amount of government information online far outpaced the number of documents printed by the Government Printing Office (GPO) for distribution through the Federal Depository Library Program (FDLP) (Jacobs, 2014). The sheer volume of this information makes both providing access (at least through bibliographic control) and ensuring preservation extremely difficult. What’s worse, much of this information is transitory and is lost when administrations change or Congressional committees disband.
On April 11th, members of the Digital Scholarship Team presented initial work in analyzing the text of the Indianapolis Recorder at IUPUI Research Day. The Indianapolis Recorder is one of the nation's oldest and most prominent African American newspapers, and the University Library has a digitized collection covering the vast majority of issues published between 1899 and 2005. The full text of more than 96,000 pages is currently available for export from the ContentDM platform as a tab-delimited or XML text file (1.3GB).
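A tab-delimited export of that size is easy to process a row at a time with Python's standard library. The sketch below is my own illustration, not our actual analysis code, and the column names (`title`, `date`, `fulltext`) are hypothetical stand-ins for whatever fields the real ContentDM export contains; a small inline sample substitutes for the 1.3GB file.

```python
import csv
import io

# Stand-in for the ContentDM tab-delimited export. The real file is ~1.3GB,
# and its actual column names may differ from these hypothetical ones.
sample = (
    "title\tdate\tfulltext\n"
    "Page 1\t1899-01-07\tExample OCR text from a digitized page...\n"
)

def iter_pages(fileobj):
    """Yield one dict per newspaper page from a tab-delimited export."""
    for row in csv.DictReader(fileobj, delimiter="\t"):
        yield row

pages = list(iter_pages(io.StringIO(sample)))
print(pages[0]["date"])  # → 1899-01-07
```

Because `iter_pages` is a generator, the same loop would stream through the full export without loading 1.3GB into memory at once.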