3 Days on Capitol Hill

As a member of the SLUO executive committee and co-chair of the Washington, D.C. Trip subcommittee, I wanted to update people on the status of our annual trip to DC. There are currently 11 members of the team going to DC, representing diverse areas of the SLAC PPA user community: astronomy and astrophysics, collider physics, and neutrino physics. SLAC is part of a larger multi-team effort representing users at SLAC, at FNAL, and in the US-LHC community. In total, about 45 people are going, targeting about 220 Congressional offices.

Our focus will be the FY10 budget. FY09 is a done deal, and the stimulus is old news. Our goal is to keep up the effort in Congress to fund science as a priority, in the face of growing deficits and difficult political realities. We will communicate the nature and value of our work and its impact on districts, states, and the nation, and call for continued commitment to the doubling of the combined physical sciences budgets at NIST, NSF, and DOE envisioned in the America COMPETES Act. Congress authorized this doubling over 7 years, and the FY09 budget represented the first step in the commitment to that goal.

In the face of economic turmoil and competing political interests, we are eager to stress the power of curiosity-driven research to educate and train the next generation of leaders in science, technology, and many other areas of economic interest. We are excited to share a little about our work, to hear what Congress has to say (their fears, their concerns, their priorities), and to continue a conversation that, we hope, will grow into a lasting relationship between Congress and the science it funds.

The trip is April 28-May 1. If you have ideas or comments, you can send them to sluo.dc@gmail.com.

Electronic Logbooks

I’ve experimented over many years with electronic logbooks. Back in 2001, I got tired of literally cutting and pasting print-outs of plots into paper logbooks swollen by glue and pictures. Since then, I’ve tried a number of different electronic logbook technologies. It’s important to me that they can be backed up, and that data can be easily retrieved from them. Here I report what I’ve used, what I’ve liked, and what I haven’t.

ELOG: (https://midas.psi.ch/elog/) This is the Swiss Army knife of open-source, free logbooks. It can run locally, or on a server at home or work, and it works for personal logs or for collaboration; I’ve used it both ways. When I was at MIT, I used it to organize analysis between collaborators at MIT and SLAC. It can send e-mail when entries are added or when old ones are updated. Entries can be HTML or plain text, with embedded or attached graphics. One of the coolest features, I think, is that it will take an attached multi-page PDF file and render the pages as individual graphics in the logbook entry, which lets you read the PDF without opening it. I recommend it to people who need to collaborate, and for personal use to anyone who likes a beefy logbook.
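ELOG also ships with a command-line client, so entries can be posted from scripts, which is handy for automated jobs. Here is a rough Python sketch of the idea; the hostname, port, logbook name, and attribute values below are made up, and you should check the client’s own help for the exact flags your installation supports.

    # Hypothetical sketch: post an entry through the elog command-line client.
    # The hostname, port, logbook name, and attributes are placeholders.
    import subprocess

    def post_entry(subject, text, attachment=None,
                   host="elog.example.edu", port="8080", logbook="Analysis"):
        """Post a plain-text entry, optionally with one attached file."""
        cmd = [
            "elog",
            "-h", host,                   # ELOG server hostname
            "-p", port,                   # port elogd listens on
            "-l", logbook,                # target logbook
            "-a", "Author=A. Physicist",  # attributes defined in the logbook config
            "-a", "Subject=" + subject,
        ]
        if attachment is not None:
            cmd += ["-f", attachment]     # attach a plot or PDF
        cmd.append(text)                  # the entry body itself
        subprocess.run(cmd, check=True)

    post_entry("Nightly fit results", "Fits converged; plots attached.",
               attachment="fit_summary.pdf")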

Xournal: (http://xournal.sourceforge.net/) Also open-source and free, this is good for the tablet PC crowd. You can handwrite entries, or load graphics as page backgrounds and mark them up (great for annotating PDF files!). It combines old-school handwriting with new-school electronic record-keeping. I’ve even considered writing slides in it, getting back to the old overhead-projector days of giving seminars and talks. What’s held me back is that you can’t place graphics arbitrarily on a page, which rules it out for talks and keeps it from being a serious logbook for me. Another big drawback is that the content isn’t accessible as text: handwritten entries can’t simply be cut and pasted into e-mails or onto the web. But it has lots of potential, and is definitely worth watching.

Tiddlywiki: (http://www.tiddlywiki.com/) For the wiki, stream-of-consciousness crowd, this is the way to go. I’ve actually switched from ELOG to Tiddlywiki for my own logbook very recently, thanks to a rediscovery prompted by a student. It supports the creation of “tiddlers”, self-contained units of information that can link to, and be linked from, other tiddlers. It also supports “journal entries”, tiddlers whose name (and thus hyperlink) is based on the current calendar date, with one entry per day. You can embed graphics, store the whole thing in CVS and sync it (or use rsync to back it up), and, best of all, edit it straight through the web browser. Since I use Zotero for storing papers, and a lot of physics information lives on the web, it’s great to have an all-in-one place for my note-taking. Tiddlywiki, and variants on it, have become my go-to way of rapidly developing documentation or a logbook.
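To give a sense of how simple the upkeep is, here is a rough sketch of the kind of backup script one could run nightly, assuming the whole wiki lives in a single HTML file and that rsync is available; the paths and the remote host below are placeholders, not my actual setup.

    # Rough sketch of a TiddlyWiki backup: keep a dated local snapshot
    # (one per day, like a journal tiddler), then mirror everything to a
    # remote machine with rsync. All paths and hosts are placeholders.
    import datetime
    import shutil
    import subprocess
    from pathlib import Path

    WIKI = Path.home() / "notes" / "tiddlywiki.html"        # the one-file wiki
    SNAPSHOTS = Path.home() / "notes" / "snapshots"         # dated local copies
    REMOTE = "user@backup.example.edu:backups/tiddlywiki/"  # rsync destination

    def snapshot_and_sync():
        SNAPSHOTS.mkdir(parents=True, exist_ok=True)
        today = datetime.date.today().isoformat()           # e.g. 2009-04-01
        shutil.copy2(WIKI, SNAPSHOTS / f"tiddlywiki-{today}.html")
        subprocess.run(
            ["rsync", "-avz", str(WIKI), str(SNAPSHOTS) + "/", REMOTE],
            check=True,
        )

    snapshot_and_sync()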

Hope this opens some doors for you!

The Flavor Legacy

It’s been a while since I put anything in the old professional blog, so I thought I would start being more regular about posts here. I’ll begin with some thoughts about the unique situation the current-generation flavor factories find themselves in as their colliders turn off. There are currently three major flavor factory data samples in the world (in alphabetical order of acronym): BaBar, Belle, and CLEO/CLEO-c. These samples are unique; BaBar and Belle together have about 1.5 billion B meson pairs for use in precision and discovery physics, along with comparably large samples of tau leptons and charm mesons. In addition, BaBar has the largest sample of data taken at the Upsilon(3S) and Upsilon(2S) resonances, and the most detailed scan data for center-of-mass (CM) energies between 10.58 and 11.2 GeV. Belle has the world’s largest samples of data taken on the Upsilon(1S), Upsilon(5S), and Upsilon(6S) resonances. CLEO-c has unique samples of charm mesons taken at well-defined CM energies just above charm threshold, or at specific charmonium resonances.

This in and of itself is not so worrying; these are all good things. The worrying thing is that there is a possibility that these datasets will go unchallenged until at least 2015, and maybe much later. That’s because a next-generation flavor factory won’t run until at least then. Meanwhile, the Large Hadron Collider will be steadily taking hadron collision data, searching for evidence of physics beyond the Standard Model. The challenge is the following: should new physics be found, its nature can be attacked with precision measurements possible only at a flavor factory. Will the existing datasets be enough? If so, who will conduct the research on these unique samples and test the implications of that new physics? If those samples are not enough, will we be ready with the R&D and proposals needed to build the next-generation flavor factory?

As so many of us begin to think about life after the factories, we take our unique knowledge of physics analysis at these experiments out of that community and into others. Keeping a hand in the legacy data is important, where possible, for young scientists as they mature and get promoted in the field. It’s therefore also important for senior members of the field, whose careers are established, to continue playing an active role with this data.

How the future of this data is to be secured is also under discussion. There are evolving plans for archiving legacy data sets like those of the flavor factories. Nothing is certain, except this: the data collected by these astounding machines and their devoted collaborations should not be allowed to fade from memory, lest we forget that a new discovery must always be tested in every way possible. Discovery is important, but confirmation and implication are just as important. They define the value of our science, turning it from a bunch of sexy headlines into a canon of serious knowledge that forms the shoulders of giants.