Build a Repository of DH Job Letters, T&P Files

Although there are some examples out there of people’s DH job application letters and tenure & promotion materials, there’s no central repository of professional materials to help guide people through what should be a straightforward process of presenting one’s credentials. Those of us who have served on hiring committees have all read great and terrible job letters, but most letter writers have only seen their own.

There’s even less out there showcasing the other side of the equation, particularly in the promotion process. The ability to study not only candidates’ tenure materials but also the letters prepared by department committees, chairs, college committees, deans, etc. would appear to be of intense interest (and real utility) to people in all kinds of professional positions.

After being hired, promoted, whatever, we like to put all this messiness behind us and conceal it under the professional veneer of superior credentials inexorably prevailing. But it’s a complicated process of negotiation that could use some cleansing daylight.

I think it’s safe to assume that anyone who doesn’t participate in this session has crafted his/her professional reputation from a tissue of lies.

Scripting Languages for Humanists

Fred Gibbs has proposed a session on R for Humanists, and I’d like to propose a complementary session on scripting languages for humanists. Ruby and Python are popular among digital humanists for a variety of applications, including:

  • data munging
  • data analysis
  • natural language processing (with the Natural Language Toolkit; see the sketch after this list)
  • geocoding (I use ggmaps, but there are lots of options)
  • automation
  • web scraping
  • solving particular programming problems of interest to humanists, such as handling historical dates
  • general purpose programming
  • web development
  • system administration (e.g. Wayne Graham’s Capistrano recipes for Omeka)
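
As a small taste of what this looks like in practice, here’s a minimal Python sketch for the natural language processing item above: tokenize a passage with the Natural Language Toolkit and count word frequencies. (This assumes NLTK is installed and its “punkt” tokenizer data has been downloaded; the passage is just an illustration.)

    import nltk

    text = "It was the best of times, it was the worst of times."
    tokens = nltk.word_tokenize(text.lower())   # split into words and punctuation
    words = [t for t in tokens if t.isalpha()]  # drop the punctuation tokens
    freq = nltk.FreqDist(words)                 # tally each word's frequency
    print(freq.most_common(3))                  # e.g. [('it', 2), ('was', 2), ('the', 2)]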

We needn’t limit the session to Ruby and Python; in fact, I hope we talk more about R. Intro-to-programming sessions are ever popular; that’s not what I have in mind, but maybe that would be useful too.

This session could turn into a show-and-tell where we share what we’re doing with these kinds of languages and get some new ideas. We might decide to solve some problem of general interest to the group. Or we might decide to work through the exercises in the Programming Historian (Python) or the Rubyist Historian (Ruby).
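
For a sense of where those exercises start, here’s a rough sketch in the spirit of the Programming Historian’s early lessons, written in modern Python: download a web page and strip out the HTML tags to get at its text. (The URL is a placeholder; the lessons themselves build this up step by step.)

    from urllib.request import urlopen
    from html.parser import HTMLParser

    class TextExtractor(HTMLParser):
        """Collect a page's text content, ignoring the markup."""
        def __init__(self):
            super().__init__()
            self.chunks = []

        def handle_data(self, data):
            self.chunks.append(data)

    # Fetch the page and feed its HTML through the parser.
    html = urlopen("http://example.org/").read().decode("utf-8")
    extractor = TextExtractor()
    extractor.feed(html)
    print(" ".join(" ".join(extractor.chunks).split()))  # collapse whitespace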

new ways to publish humanities scholarship?

Let’s get together and brainstorm new possibilities for publishing humanities scholarship. If we’re not satisfied with digital versions of journal articles and monographs, what alternatives can we propose? Are there any models that look promising or interesting experiments (like Scalar) in the works? Let’s dream up what would be exciting and useful without getting bogged down in a conversation about tenure and promotion. If scholarly communication is the goal — rather than checking boxes — what do we want to do?

This conversation could focus on the specific challenges of digital humanities scholarship, or approach humanities scholarship more broadly. But let’s focus on the production and dissemination of scholarship, not on getting credit.

building a repository of publishing contracts

This is a call for help and contributions for a project that (if I get some collaborators) might be part of the maker challenge or might be something that lays the groundwork for a future project.

One of the tricky things about agreeing to and negotiating contracts for publishing something is unfamiliarity with the options available. Unless you’ve done a lot of publishing, you might not even have a sense of what an author’s contract looks like. Even if you have published a lot, you might not know what a specific publisher offers—are you going to agree to write a contribution to a book only to discover that the publisher demands that you sign over your copyright and isn’t willing to negotiate? (That’s not a hypothetical example, by the way.) I’ve written about negotiating a new contributor’s contract; my experience of doing that and sharing the process suggests there’s a real hunger for advice on what contracts look like and what our options are for publishing.

What I’d like to see is a site where people can upload and share their contracts. There may be legal issues to sort through—I’m pretty sure that most contracts aren’t proprietary and can therefore be shared publicly, but I’m also pretty sure that most publishers won’t like that. There are technical issues to sort through—what sort of platform is best for a project like this, allowing for public uploads of documents and controlled options for tagging and searching? And there are sustainability issues—this might be a project that is best run by an organization rather than an individual.

I’d love it if there were some THATCampers who wanted to think through these issues with me and to build a prototype of what it might look like. And I’d really love it if there were THATCampers who would be willing to contribute their contracts to it. (If you do contribute your contract, you probably want to black out your name and your publication’s name, but you’ll need to leave the publisher’s name visible.) If you want to contribute your contract, you can leave a link to it below or email it to me (sarah.werner at gmail.com).

Intro to Omeka Plugins rescheduled to facilitate fame and glory!

Just a note for people planning to attend the Intro to Omeka Plugins workshop: it has been rescheduled to the first session slot after scheduling on Friday.

Why? For fame, glory, and fabulous prizes!

Whose fame, glory, and fabulous prizes? YOURS!

If you have an idea for an Omeka plugin and want to dive right in, you can make your work part of the THATCamp Challenge. Take what you learn at the workshop and start building: join other new and seasoned developers in the Makerspace (RRCHNM central in Research Hall, room 470) and have something new to show off at the end of THATCamp.

Good luck!

Saturday Traffic Alert: HS Graduations on Campus

Hi Folks,

I want to warn anyone who is driving to THATCamp on Saturday that there are three high school graduations scheduled at the Patriot Center (which is on the same side of campus as the THATCamp festivities, near the Braddock Road entrances to GMU). Graduations will be at 9:30am, 2:30pm, and 7:00pm.

The biggest challenge will most likely be getting in for the Saturday morning sessions.

According to Mason Parking:
“Be prepared for heavier traffic up to 90 minutes prior to each ceremony and allow more time to drive to and from campus. Parking and Transportation Services encourages staff and students to use the Rappahannock River Parking Deck or the Field House parking lots to avoid traffic. Mason will be best accessed through entrances off of University Drive or Roberts Road. Check Parking and Transportation’s Facebook and Twitter pages for updates on conditions on and around campus.”

Let’s Build an Omeka Training Kit

Instructor: Sheila Brennan

    Requirements

  1. Working knowledge of Omeka.
  2. Desire to teach others how to use Omeka.

The goal of the workshop is to encourage anyone and everyone to jump in and offer Intro Omeka workshops to help train colleagues and students at their home institutions. In this new workshop, we will work together to build an open training kit for Omeka trainers. I will start by sharing my workshop outlines and will ask others to share their experiences, so that we can build a master workshop outline with suggestions for accompanying files to make giving Intro workshops easier for all. We will make these materials available in a Google Group, a Zotero group, or both, to make it easier for everyone to add, share, and build.

Please add Omeka articles and other resources to this public Zotero group: www.zotero.org/groups/omeka

JSTOR Data for Research workshop

In this workshop we will provide both a general overview of the JSTOR Data for Research (DfR) service and a “how to” for using Hadoop and cloud computing for text mining large datasets. For the big data mining portion of the workshop we will be using a large dataset consisting of the JSTOR Early Journal Content (EJC) collection. A bundle of metadata and full text for the approximately 460,000 articles in the EJC collection can be downloaded from the DfR site. For this tutorial we have pre-loaded the EJC content into Amazon Web Service (AWS) data storage and will provide instructions on how to use the AWS Elastic Map Reduce (EMR) service for efficiently mining this dataset. In this tutorial we’ll show how to create an AWS account, develop and submit Map-Reduce jobs (written in Python) and retrieve results. The examples provided will include the generation of ngrams from full text and the identification of the top words in articles via the calculation of TF*IDF scores.