Live Capture Plus and QuickTime for Java

One of the particular challenges of video preservation is how to handle and preserve the content on digital tape formats from the latter days of magnetic A/V media: Digital Betacam, DVCam, DVCPro, etc. Caught in the nebulous time of transition between analog and digital signals (the medium itself, magnetic tape, is basically the same as in previous videotape formats like VHS or Betacam – but the information stored on it was encoded digitally), these formats were particularly popular in production environments, though there were plenty of prolific consumer-grade formats as well (MiniDV, Digital8). In some ways, this makes transferring content easier than handling analog formats: there is no “digitization” involved, no philosophical-archival conundrum of how best to approximate an analog signal into a digital one. One simply needs to pull the digital content off the magnetic tape intact and get it to a modern storage medium (hard disk, solid-state, or maybe LTO, which yes I know is still magnetic tape but pay no attention to the man behind the curtain).

 https://twitter.com/dericed/status/981965351482249216

However, even if you still have the proper playback deck, and the right cables and adapters to hook up to a contemporary computer, there’s the issue of software – do you have a capture application that can communicate properly with the deck *and* pull the digital video stream off the tape as-is?

That last bit is getting especially tricky. As DV-encoded formats in particular have lost popularity in broadcast/production environments, the number of applications that can import and capture DV video without transcoding (that is, changing the digital video stream in the process of capture), while staying compatible with contemporary, secure operating systems/environments, has dwindled. That’s created a real conundrum for a lot of archivists. Apple’s Final Cut Pro application, for instance, infamously dropped the ability to capture native DV when it “upgraded” from Final Cut Pro 7 to Final Cut Pro X (you can hook up and capture tapes, but Final Cut Pro X will automatically transcode the video to ProRes). Adobe Premiere will still capture DV and HDV codecs natively, but will re-package the stream into a .mov QuickTime wrapper (you can extract the raw DV back out, though, so this is still a solid option for many – even if, for just as many others, an Adobe CC subscription is beyond their means).

One of the best options for DV capture is (was?) a Mac application called Live Capture Plus, made by Square Box Systems as part of its CatDV media suite. It has great options for error handling (e.g. automatically trying to read a problem area of a tape multiple times if there’s dropout), generating DV files based on clips or scenes or timecode rather than the whole tape, remote tape deck control over the FireWire/Thunderbolt connection, etc. – a bunch of ingest-focused features that make it more appealing to an archivist than an application primarily meant for editing, like Adobe Premiere. It also talks to you, which is fun but also terrifying.

Failed to power up

However, Square Box removed Live Capture Plus from its product list some years back, and as far as I’m aware has refused all pleas to either open-source the legacy code or even continue to sell new licenses to those in the know.

Let’s say you *are* lucky enough to still have an old Live Capture Plus license on hand, however. The Live Capture Plus GUI is built on Java – but an older, legacy version of Java – so when you try to run the app on a contemporary OS (~10.10 and up), you’ll first see this:

Luckily, for at least the moment, Apple still offers/maintains a download of this deprecated version of Java – just clicking on “More Info…” in that window will take you there, or you can search for “Java for OSX” to find the Apple Support page.

OK, so you’ve downloaded and installed the legacy Java for OSX. Yet this time, when you try again to run Live Capture Plus, you run into this fun error message instead:


All right. What’s going on here?

When I first encountered this error, even though I didn’t know Java, the message provided two clues: 1) “NoClassDefFoundError” – so Java *can’t find* some piece that it needs to run the application correctly; and 2) “quicktime”/”QTHandleRef” – so it specifically can’t find some piece that relates to QuickTime. That was enough to go on a search engine deep dive, where I eventually found this page where researchers at the University of Wisconsin-Madison’s zoology/molecular biology lab apparently encountered and solved a similar issue with a piece of legacy software related to, near as I can figure from that site, taking images of tiny tiny worms. (I desperately want to get in touch and propose some sort of panel with these people about working with legacy software, but am not even sure what venue/conference would be appropriate.)

The only way my ’90s-kid brain can understand what’s happening

So basically, recent versions of Mac OS have not included key files for a plugin called “QuickTime for Java” – a deprecated software library that allowed applications built in/on Java (like Live Capture Plus) to provide multimedia functionality (playback, editing, capture) by piggybacking on the QuickTime application’s support for a pretty wide range of media formats and codecs (including DV). Is this fixable? If you can get those key files, yes!

For now, both the downloads and instructions for what to do with these three files are available on that Hardin Lab page, but I’m offering them here as well. The fix is pretty quick:
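In short – and this is just a minimal command-line sketch of the user-level variation I describe below, not the Hardin Lab’s exact steps – you create a Java Extensions folder in your Library if one doesn’t exist, then copy the three downloaded files in. QTJava.zip is shown here as a stand-in; use the actual file names from the Hardin Lab download:

$ mkdir -p ~/Library/Java/Extensions

$ cp ~/Downloads/QTJava.zip ~/Library/Java/Extensions/

Repeat the copy for each of the three files, then re-launch Live Capture Plus.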


I would only note a couple of things: I’ve successfully installed onto macOS Sierra (10.12) without needing to mess with the System Integrity Protection settings, by just installing for the local user (i.e. putting the files into ~/Library rather than /System/Library); and if you want to do this via Finder rather than the command line, here is how to get to the “View Options” box to reveal the Library folder in Finder, as mentioned in Step 1 above (a useful trick in general, really, if you’re up to digipres shenanigans):

Once these files are in place, Live Capture Plus should open correctly and be able to start communicating with any DV/FireWire-capable deck that you’ve got connected to your Mac – again, provided that you’ve got a registration code to enter at this point.

A final word of warning, however. Live Capture Plus comes from the era of 32-bit applications, and we’re now firmly in the era of 64-bit operating systems. Exactly what all that means is probably the subject of another post, but basically it’s just to say that legacy 32-bit apps weren’t made to take advantage of modern hardware, and may run slower on contemporary computers than they did on their original, legacy hardware. Not really an issue when you’re in video/digital preservation and your entire life is work-arounds, but recently Mac OS has taken to complaining about 32-bit apps:

Despite 32-bit apps posing, near as I can tell, no actual security or compatibility concerns for 64-bit OSes (32-bit apps just can’t take advantage of more than 4GB of RAM), this is a pretty heavy indication that Apple will likely cut off support for 32-bit apps entirely sometime in the not-so-distant future. And that will go not just for Live Capture Plus, but for other legacy apps capable of native DV transfer (Final Cut 7, the DVHSCap utility from the FireWire SDK, etc.)

So go get those DV tapes transferred!!!!

Getting Started with BagIt in 2018

Take two!

In December, I hastily wrote an update to an old post about BagIt, the Library of Congress’ open-source specification for hierarchical packaging of files to support safe data storage and transfer. The primary motivation for the update was some issues that the Video Preservation course I work with had encountered with my instructions for installing the bagit-python command-line tool, so I wanted to double-check my process there and make sure I was guiding readers correctly. I also figured that it had been a couple of years and I could write about new implementations while I was at it. A cursory search turned up a BagIt-for-Ruby library, so I threw that in there, posted, *then* opened up a call for anything I’d missed.

Uhhhh – I missed a lot.

It was at this point, as I sifted through the various scripts, apps, tools, and libraries that create bags in some way, that I realized I had lost the thread of what I was even trying to summarize or explain.

Every piece of software using the BagIt spec ever? That, happily, is a fool’s errand – the whole point of the spec is that it’s super easy and flexible to implement, no matter how short the script. So…there’s a lot of implementations.

Every programming language with an available port/module for creating bags according to the BagIt spec? Mildly interesting for hybrid archivist/developers, but probably of less practical use for preservation students, or the average user/creator just trying to take care of their own files, or archivists that are less programming-inclined. A Ruby module for BagIt is objectively cool and useful – for those working/writing apps and scripts in Ruby. Given that setting up a Ruby dev environment requires some other command-line setup that I didn’t even get into, someone’s likely not heading straight to that module right out of the gate.

“Using BagIt” was/is the wrong framework. Too broad, too undefined, and, as Ed Summers pointed out, antithetical to the spirit in which a simple, open-source specification is made in the first place: to allow anyone to use it, anywhere, however they can – not according to one of four or five methods prescribed in a blog post.

So I am rewriting this post from the mindset not of “here’s all the forms and tools in which BagIt exists”, but rather, “OK, so I’m learning what a bag is and why it’s useful – how can I make one to get started?”

Because the contents of a specification are terrific and informative, but in my experience nothing reinforces understanding of a spec like a concrete example. And not only that, but one step further – *making* an example. Technical concepts without hands-on labwork or activities to solidify them get lost – and budding digital preservationists told to use the BagIt spec need somewhere to start.

So whether you’re just trying to securely back up your personal files to a cloud service, or trying to get a GLAM institution’s digital repository to be OAIS-compliant, validation and fixity start at square one. Let me start there as well.

What’s a bag?

Just for refresher’s sake, I’m going to re-post here what I wrote back in 2016 – so that this post can stand alone as a primer:

One of the big challenges in digital archiving is file fixity – a fancy term for checking that the contents of a file have not been changed or altered (that the file has remained “fixed”). There’s all sorts of reasons to regularly verify file fixity, even if a file has done nothing but sit on a computer or server or external hard drive: to make sure that a file hasn’t corrupted over time, that its metadata (file name, technical specs, etc.) hasn’t been accidentally changed by software or an operating system, etc.

But one of the biggest threats to file fixity is when you move a file – from a computer to a hard drive, or over a server. Think of it kind of like putting something in the mail: there are a lot of points in the mailing process where a computer or USPS employee has to read the labeling and sort your mail into the proper bin or truck or plane so that it ends up getting to the correct destination. And there’s a LOT of opportunity for external forces to batter and jostle and otherwise get in your mail’s personal space. If you just slap a stamp on that beautiful glass vase you bought for your mother’s birthday and shove it in the mailbox, it’s not going to get to your mom in one piece.

So a “bag” is a kind of special digital container – a way of packaging files together to make sure what we get on the receiving end of a transfer is the same thing that started the journey (like putting that nice glass vase in a heavily padded box with “fragile” stamped all over it).

Sounds great! How do I make a bag?

At its core, all you need to make a bag out of a digital file or group of files is an editor capable of making plain text files (.txt) and an ability to generate MD5 checksums. An MD5 generator takes *any* string of digital information – including an entire file – and encodes it into a 128-bit fingerprint; that is, a 32-character string of seemingly “random” letters and numbers. Running an MD5 generator on the same file will always produce the same 32-character string. If the file changes in some way (even some change or edit invisible to the user), the MD5 string will change as well. So this process of generating and checking strings allows you to know whether a file is exactly the same on the receiving end of a transfer as it was at the beginning.
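To make that concrete: macOS ships with a command-line MD5 generator called “md5” (on Linux, the equivalent is “md5sum”). Run against a file – the file name and output here are purely illustrative – it looks like this, and running it again on the same, unchanged file will return the exact same string every time:

$ md5 vacation_video.mov
MD5 (vacation_video.mov) = 79054025255fb1a26e4bc422aef54eb4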

BagIt bags facilitate this process via a “manifest” – a text file listing all the digital files contained in the bag (the “data” in question) and their corresponding MD5 checksums (a second file, the “tag manifest”, does the same for the bag’s own metadata files). Packaged together (along with some meta information on the BagIt spec and the bag itself), this all allows for convenient fixity checking.
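For a concrete picture of the hierarchy the spec dictates, here’s a minimal (hypothetical) finished bag – the original files live inside “data”, and everything around them is plain text:

vacation_bag/
    bagit.txt
    bag-info.txt
    manifest-md5.txt
    tagmanifest-md5.txt
    data/
        vacation_video.mov

Inside manifest-md5.txt, there’s one line per payload file – checksum, then path:

79054025255fb1a26e4bc422aef54eb4  data/vacation_video.mov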

Convenient, though, in the sense of easing automation. While you *can* put together a bag by hand – generating checksums for each file, copying them into text files to create the manifests, structuring the data and manifests together in BagIt’s dictated hierarchy – that is a copy/paste nightmare, and not exactly going to encourage the computer-shy into healthier digipres practice.

This is why simple scripts and tools and apps are handy. Down the line, when you’re creating your own archival workflow, you may want to find or tweak or make your own process for creating bags – but for your first bag, there’s no need to reinvent the wheel.

I’m going to cover intro tools here, for either the command line or GUI user.

Command Line Tools

  1. this Bash script

    A simple shell script by Ed that requires just two arguments: the directory you want to bag, and an output directory (in which to put the bag).

    Hit the green “Download” button in the corner of the GitHub page, select the ZIP file, then unzip the result. Move the “bagit.sh” file inside to a convenient/accessible location on your computer.

    Once in Terminal, you can run this script by navigating to wherever you put it, then executing it with:
    $ ./bagit.sh /path/to/directory /path/to/bag

    or

    $ bash bagit.sh /path/to/directory /path/to/bag

    (the “./” and “bash” invocations do the same thing – both tell the terminal to execute the bagit.sh script; the “./” form just requires that the file be marked executable)

    The “/path/to/directory” should be a folder containing all the files you want to be in the bag. Then you specify the output path for the bag with “/path/to/bag”. Both paths can be filled in by dragging and dropping folders from the Finder.
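    For example, to bag a folder of photos sitting on your Desktop into a new bag alongside it (hypothetical folder names):

    $ bash bagit.sh ~/Desktop/photos ~/Desktop/photos_bag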

  2. bagit-python

    Bagit-python is the Library of Congress’s officially-supported command-line utility for making and working with bags. It requires a working Python interpreter on your computer, plus Python’s package manager, “pip”. By default, macOS comes with a Python interpreter (2.7.10), but not pip. So we go to the popular command-line Mac package manager Homebrew to put this all together.

    Sigh. OK. So one of the reasons this post didn’t come out last week is that, literally in that same time frame, Homebrew went through… something with regard to its Python packages and how they behaved with Python 2.x vs. Python 3.x vs. the Python installation that comes with your Mac (they’ve locked/deleted a lot of the conversations and issues now, but it was really the dark side of FOSS projects in there for a bit). I kept trying to check that my instructions were correct, and meanwhile, every “$ brew update” was sending my Python installs haywire. Things seem to have finally settled, but I’d still generally recommend giving this page a once-over before working with Python-via-Homebrew.

But to summarize: if you want to work with Python 3.x, you install a *package* called “python” and then invoke it with python3 and pip3 commands. If you want to use Python 2.x, you install a package called “python@2” and then invoke with either python and pip or python2 and pip2 commands.

…got it?
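If you want to double-check which interpreters you’ve actually ended up with, each of these commands will simply report a version number (or an error, if that particular install isn’t present):

$ python --version

$ pip --version

$ python3 --version

$ pip3 --version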

For the purposes of just using the bagit-python command-line tool, at least, it doesn’t matter whether you choose Python 2.x or 3.x. It’ll work with both. But stick with one or the other through this installation process. So either:

$ brew install python

+

$ sudo pip3 install bagit

or:

$ brew install python@2

+

$ sudo pip install bagit

That’s it! It’s just a matter of making sure you have a version of Python installed through Homebrew, then using the Python package/module installer “pip” to install the bagit-python tool. I highly recommend using admin privileges with “sudo” for a global install, to avoid some weird permissions issues that may arise from trying to run Python scripts and tools like bagit-python otherwise.

Once installed, look over the help page with

$ bagit.py --help

to see the command syntax – and all the features on offer! Including using different hash algorithms (rather than MD5), adding metadata, validating existing bags rather than creating new ones, etc.
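As a quick example, here are the two invocations from that help page you’ll likely use most – keeping in mind that, unlike the Bash script above, bagit-python transforms the target directory into a bag *in place*:

$ bagit.py /path/to/directory

$ bagit.py --validate /path/to/directory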

*** a note about bagit-java ***
If you are using Homebrew and just run

$ brew install bagit

it will install the bagit-java 4.12.3 library and command-line tool. The LOC no longer supports this tool and doesn’t recommend it for command-line use, and the --help instructions that come with it don’t even actually reflect the command syntax you have to use to make it work. So! This isn’t a recommendation, just a note for Homebrew users who might get confused about what’s happening here.

GUIs

1. Bagger

Again, the LOC’s official graphical utility for creating and validating bags. Following the instructions from their GitHub repository linked above, you’re going to download a release and then run it on macOS by finding and clicking on the “bagger.jar” file (you’ll need a working Java install as well).

Inside Bagger, once you choose the “Create a Bag” option, Bagger will ask you to choose a “profile” – these just refer to the metadata fields available for inserting more administrative information about your bag and the files therein, within the bag itself. These are really useful for keeping metadata consistent if you’re creating a whole bunch of bags, but choosing “<no profile>” is also totally acceptable to get started (you can always re-open bags and insert more metadata later!)

“Create Bag in Place” is also a useful option if you don’t want to have two copies of your files (the original + the copy inside the “data” folder in your bag) – or if digital storage limitations even *prevent* it. Rather than copying and creating the bag in a new directory elsewhere, it’ll just move around/checksum/restructure the files according to the BagIt spec within the original directory.

2. Exactly

A GUI developed by AVP and the University of Kentucky that combines the bagging process with file transfer – which is the presumed end goal of bagging in any case. To that end, Exactly doesn’t “bag in place” – you always have to pick a source file/directory (or sources – Exactly will bundle them all together into one bag) and a destination for the resulting bag. Like Bagger, you can also add metadata via custom-designed fields or import templates/profiles. Added support for FTP or SFTP transfers to remote servers (in addition to locally-attached network storage like a Samba or Windows share) makes it a simple starter option for file delivery.

***************************

If you’re getting started with the BagIt spec, these are the places I’d begin. But as to what implementation *you* can come up with from there, based on your personal/institutional needs…that’s up to you!

Mastodon and the DigiPres Club

A little while back, I changed my display name on Twitter. Besides my actual name (all your anonymity is belong to us), I added on a second handle: @The_BFOOL@digipres.club.

(Not that anyone cares, but if you’ve ever wondered what the deal with the “BFOOL” handles is, it’s a reference to my well-hidden other blog, and a vestige of the brief moment I thought I might make an actual professional go of it as a film writer. I remain latched on to it now out of nostalgia for the anonymous days of internet yore, where signing up for a forum meant coming up with a cool hacker handle rather than providing a UUID that linked my secret enthusiasm for ASMR to my credit score)

What is digipres.club? And why am I – along with some other, potentially familiar-to-you users – promoting it on Twitter? I wanted to offer a brief primer, and perhaps a few thoughts on community-driven social media and what this platform potentially means to me (feel free to skip out before that part).

In the shortest, but perhaps not simplest, of terms, digipres.club is a Mastodon instance. What is Mastodon? In its own words, Mastodon is a “free, open-source, decentralized microblogging network.”

Sure. 👌

In its first surge of publicity back in the spring of 2017, most of the techie buzz around Mastodon billed it as “open-source Twitter.” And that’s still probably the quickest way to frame it – the interface looks and feels pretty much exactly like Tweetdeck or many other popular Twitter clients, so if you’ve spent any time around “the birdsite” (as it is not-so-fondly known in the Masto-verse), you’ll get the basic hang of Mastodon almost immediately. You’ll write and post relatively short status updates/posts (“toots” in Mastodon vocabulary, which may seem twee but look how quickly we all got used to literally “tweeting”), and share those posts, including links, photos, videos, etc. etc., among a group of followers who will see your thoughts pop up on their timeline.

Or, er, multiple timelines. This tends to be where people get thrown off with Mastodon, because it’s where the idea of a decentralized, “federated” social platform comes in.

No, not that one. (except yes kind of also that one)

In essence, anyone with access to a server can run Mastodon on it (that’s the free and open-source part). That server (“instance”) hosts “individual user accounts, the content they produce, and the content they subscribe to”: posts (toots), images, video. It’s the same model as Twitter, Facebook, Google, Snapchat – only instead of a tech company hosting and distributing your content, it’s likely one person or maybe a small group of people, working at a drastically reduced scale.

*But* – even if different people are hosting/administrating them, Mastodon instances can still talk to each other, because they’re running the same software, speaking the same language. That’s the idea of “federation”. Any user account on Mastodon thus has two components – it identifies both the user’s handle/username and the name of the instance that account and its content are originally hosted on, e.g. @The_BFOOL@digipres.club. Each user is also then going to have three major timelines:

  • your “Home” timeline – only shows posts from other users that you have specifically subscribed to/followed
  • your “Local” timeline – all the public posts from the server/instance your user account is hosted on (e.g. all the posts on digipres.club)
  • your “Federated” timeline – all the public posts from all the Mastodon servers/instances your local instance is connected to

Again, that last one is the trickiest in terms of understanding exactly what it’s showing you. I believe that instances are not technically “federated” with one another until a user on one instance – *any* user – follows a user on another instance. At that point, public posts from the second instance’s Local timeline start showing up in the first instance’s Federated timeline.

I’ll be honest, I don’t look at/use the Federated timeline much. I think the idea is you can use it to find other/new people by a sort of “friend-of-a-friend” recommendation – these are people followed by people YOU’VE chosen to follow, or that belong to your local community instance – so maybe you’ll be interested in what they have to say. It is super fascinating to occasionally take a peek – particularly if you’re federated with one of the bigger, general instances, like mastodon.social (the “flagship” server/instance, led and maintained by Mastodon’s creator, Eugen Rochko, a.k.a. @Gargron).

But most of the time, I find the strength of Mastodon is in the local timeline/instance. These are opportunities, like the web forums of old, for communities to build and define themselves – each host has to decide what their instance is for, what makes it unique enough for people to choose to make a user account on *this particular* Mastodon instance rather than another.

(To be clear, the whole federated angle also means you can easily sign up for multiple Mastodon accounts, on different instances, if you’re interested in different communities – for example, I regularly check @The_BFOOL@digipres.club, but I also have @The_BFOOL@octodon.social, another general-purpose social instance that was where I first tried out the platform. That means, in its infrastructure, Mastodon is a kind of cross between Twitter and email – any one of us could have both an @gmail.com account and an @yahoo.com account, which can talk to each other and everyone else on email despite being hosted in different places)

To me, this infrastructure combines the best parts of Twitter – self-determination (the ability to create your own gaggle of thought-provoking voices), and a network of professional questions posed and answered in a quick, informal setting, encouraging participation and leveling the playing field from the documented social biases of peer-reviewed publications and organizations – while eliminating the worst bits: ads, development priority on UI updates over functionality, uhhhhh Nazis (and/or people talking about Nazis, which to state the obvious is far far less immoral/unethical/illegal than *being* or *promoting* a Nazi, but is exhausting).

That last bit is actually kind of crucial – many users (especially, it seems, various minority communities) flocked to Mastodon because it has way more sophisticated settings for moderation than Twitter, both on the side of the administrator who hosts/runs the server (who can block users from their instance, or close off federation with instances that host hateful or illegal content) and, critically, on the side of the user: there is a fantastic, easy “content-warning” system that lets users sensitively post potentially traumatizing/triggering content publicly but “behind” a warning, allowing other users to choose to see that content rather than have it shoved in their faces; and there are multiple permission levels available for every single post, beyond just direct messaging with one user or posting publicly for the entire world to see. The controls can, again, take a little getting used to – because of the way permissions are set up, it can be disconcerting to see posts intended as a private conversation with one other user appear in your timelines alongside totally public content (but rest assured, as long as you chose the right setting, you and the intended recipient are the only ones seeing it).

Like with so many other open-source projects, it’s about taking a good idea (online social networking is, removed from the many many problems it has come to be identified with in execution, not an inherently bad concept) and removing some degree of tech capitalism from the equation, giving more customization and control back to individual users and communities. This whole concept is nothing new: internet history is littered with similar projects that have come and gone based on the social technology/platform du jour – forums, instant message chat clients, etc. etc; what’s new is the current “microblogging” appeal, mixing text, links, images, etc., in quickly-digestible, constantly-updating fashion.

Here’s the thing: that does not mean, at all, that Mastodon instances/communities don’t or won’t have their own problems. Building a community – online or anywhere else – also means caring for and protecting that community. An enforceable Code of Conduct, or at least community guidelines, and people who are willing to take on the task of administrating not just technical systems and software but *people* – I believe that wherever people come together in public conversation, some thought needs to be put into these things to create a truly lasting, fair, empathetic, and constructive community.

So, to circle back to the original question. Maybe now you better understand from a technical standpoint what a Mastodon instance is, but that doesn’t really answer…what is digipres.club?

I made the first attempt at digipres.club – a Mastodon instance for users who, in my head, wanted to have professional-but-informal conversations about digital preservation – last summer. It was only a week or two before I realized I was in over my head as a sysadmin (I wish I had taken screengrabs of some of my Terminal screens, but it wasn’t pretty). I’m a firm believer in learning-by-doing when it comes to tech, but just speaking for myself and my own ability, this was a step too far, considering the end goal. Administering a server/Mastodon instance means taking some responsibility for other people’s content. I recognized my technical ability/understanding wasn’t there yet to properly commit to that. And when I got the first monthly bill for the Digital Ocean server droplet I was hosting on, I realized that I really didn’t know what I was doing even in terms of choosing a sustainable hosting option.

So I shut down that whole instance (there were only a handful of users onboard at that point, but I do apologize for not giving more notice). But as more people in the Twitter digipres community seemed to hear about/get interested in Mastodon, Joshua Ng, who works in IT at the Asian Film Archive in Singapore, decided to take another crack at digipres.club. I gladly handed over the domain name. Joshua is and will be a far more talented sysadmin than I am – the site has been up for a month or two now, federation and authentication for logging in from mobile app clients are functioning properly, and there are almost 100 users signed up for the instance – all way more than I could say about my aborted attempt!

That said – and I think Joshua would agree here – digipres.club is also very much a work in progress. There’s a great starter description for the instance, but I know some of the same people in this nebulous online archives/libraries/information sphere have expressed interest in a more generalized Mastodon community for GLAM workers. Personally, my interpretation of “digital preservation” is that it’s a very, very wide umbrella that can encompass pretty much literally everything GLAM workers do – it’s a digital world, so I feel like all preservation activities (and that includes access activities, because what is preservation without access?) either are or lead to digital preservation. But this is the whole point of decentralizing and community-building: some people can spin off into another instance if digipres.club is not what they’re looking for, and as a member of digipres.club I can choose to connect with them from a distance, move over and join them directly, join both, do whatever I want. Free as in libre!

If it’s really going to keep going, building a social online digipres community requires community support. That’ll mean things like finding ways to financially support Joshua (hosting costs money – we can’t completely remove ourselves from the global tech market here). It might mean things like establishing posting guidelines or a CoC, and finding people willing to be community managers and enforce those guidelines. Asking someone to be both sysadmin and community manager, solo, to have responsibility over tech *and* people, is a lot – it requires different skills, which one person may or may not have!

So the next and last question is….do you want to come help us figure it out? The DigiPres Club is waiting!