Process: going from the lecture room to the web

I thought it might be useful to jot down some notes on my process this semester for going from the lecture room to the web with my course material. Since starting to teach at SMU, I’ve tried each semester to (1) make lecture media available to my students, and even the general public, with priority on my own students, and (2) keep that process as simple as possible to minimize the time it takes me.

One caveat: I am quite proficient with multiple software platforms, including Linux, Mac OS, and Windows, which makes some of this a lot easier for me. I tend to use Linux and Mac OS, so that’s where my process starts.

I currently record lecture audio using my iPad. I find that the built-in microphone is great at picking up omnidirectional sound. It captures student questions sufficiently well in a room of about 60 students, and it certainly picks up audio from the lecturer or lecturers in the room. I no longer have to lug an external mic around with me.

I get the audio into my personal cloud so that I have a backup of it, and can then download it to another machine for editing the audio, adding video, and so on. I currently use ownCloud for this: I run a server at home with about 3TB of storage available through my own private ownCloud instance. The iPad application can send files from the iPad to the ownCloud instance, so after lecture I tell the recording application (AudioMemos) to send the .wav file to my ownCloud server. An hour and a half of lecture is about 500MB, so this takes some time to upload. During this phase, I get other work done, or get a coffee and a sandwich.
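As an aside, ownCloud also exposes a standard WebDAV endpoint, so a recording can be pushed from a laptop with a one-line curl upload if needed. A rough sketch, where the server name (cloud.example.com), the username (me), the Lectures folder, and the file name are all illustrative:

# Upload a recording over WebDAV; curl prompts for the account password.
curl -u me -T lecture-audio.wav \
  "https://cloud.example.com/remote.php/webdav/Lectures/lecture-audio.wav"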

At home, I then download the audio from my ownCloud server. If all I want is an audio lecture podcast, I simply edit the audio in Audacity.

If I want to make a video lecture podcast, I use Kdenlive. I pull the audio into the Kdenlive project editor. To place slides in the video stream, I dump my PDF to a set of image files with ImageMagick, e.g.

convert -density 150 lecture.pdf lecture-%03d.png

and then pull the image files into the Kdenlive project and drop them in so that they sync up with the audio stream.
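If there are several decks to convert, that same command can be wrapped in a small shell loop. A rough sketch, assuming the PDFs are named lecture-*.pdf in the current directory:

# Convert each lecture PDF into numbered PNG slides, one PNG per page.
for pdf in lecture-*.pdf; do
  base="${pdf%.pdf}"
  convert -density 150 "$pdf" "${base}-%03d.png"
done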

Sometimes, I shoot video in class. If there is a demonstration, it’s nice to have video of it. I’ve found that the easiest thing to do on a Mac is to use QuickTime to record a movie and then upload the movie to ownCloud via a web browser. On Linux, the program Cheese works just fine for capturing video.

Syncing the video with the audio is the hardest part. I usually zoom way in on the video and audio streams in Kdenlive and drag the video until the sound tracks from the podcast and the video line up, so that I hear the same words spoken at the same time in the two audio tracks, to within about a tenth of a second. Then I delete the video’s audio track and lock the video and audio together so that they can be edited as a single entity. This is the most time-consuming step, but editing a lecture (audio, video, slides) into a ready-to-export video project rarely takes more than an hour.

I then render a single video file from Kdenlive. I like to export in Ogg Video format (OGV). I can then upload this to YouTube or another service and share the link with students.
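If a particular hosting service is picky about Ogg containers, the rendered file can also be re-encoded with ffmpeg before uploading. This is just a sketch, not part of the workflow above, and the file names are illustrative:

# Re-encode the Ogg video as H.264/AAC in an MP4 container.
ffmpeg -i lecture.ogv -c:v libx264 -crf 20 -c:a aac lecture.mp4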

Here are my resources for the above workflow. Your mileage may vary.

  • AudioMemos: an iPad app for recording audio. Simple. Lets you pause the recording and resume later, as needed.
  • Audacity: an open-source audio editor for all platforms.
  • ownCloud: run your own secure cloud.
  • Kdenlive: a KDE-based video editor. Linux and FreeBSD only.
  • Recording video in lecture: QuickTime Player on Mac OS, or Cheese on Linux.
