
Lessons from organizing OE Global 2020

Open Education Global Conference is a yearly conference for the Open Education community that OE Global, a small non-profit, co-organizes with a local host institution. For 2020 we planned to hold the conference in mid-November in Taipei, Taiwan. But then Covid happened and we decided to make it a virtual event.

We had ~125 online sessions, plus an additional 50 posters and presentations that were delivered asynchronously. It happened over 5 days (Mon-Fri) and across all timezones (from Taiwan, through Europe, and across the Americas).

In this blog post, I’ll share some lessons that I learned, mostly about technology and related processes. As it was a big undertaking with more than 20 people involved on the organizing side, I’ll only cover my own area of responsibility.

Some technology works well

Presentation side

For our real-time presentations, we used Zoom in normal ‘meeting’ mode. With attendees joining from everywhere in the world, I rarely saw any problems. People know how to use Zoom and they have a good enough camera and microphone. We also used a machine-based auto-captioning service by rev.com to generate real-time subtitles. It’s about 90% accurate and it turned out to be quite a good addition to the presentations.

What was most surprising was how cheap this part of the stack was. Zoom is about 15 USD/month and rev.com is an additional 20 USD. I’ve looked into their competition (mostly Google Meet and Jitsi) and nobody could beat them on features or pricing. Google doesn’t even seem to offer a competing solution (I can’t just easily buy 5 seats for one month).

I also evaluated Jitsi and BigBlueButton, but they both required a lot of server infrastructure with a worse result for the end-user. The reason is that when you’re running a global conference where your participants are connecting from different continents, you suddenly have to think about peering servers, network latencies, and how to distribute your servers globally. This isn’t easy or cheap. So we’d either have to go with a commercial vendor or develop this expertise in-house.

Discussion platform and a shared space

For the online discussion platform, we’ve chosen Discourse. It turned out to be a good decision, but the flexibility and complexity of the platform made it somewhat challenging at times. Through the planning process, we managed to configure it to our liking and import all the data and users with a series of Python scripts and a custom Discourse plugin that took care of presenting the program.
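As an illustration, the user-import part of those scripts looked roughly like the sketch below. It targets Discourse’s standard user-creation endpoint, but the forum URL, API key, and attendee fields here are placeholders, not our actual values.

```python
import json
import secrets
import urllib.request

DISCOURSE_URL = "https://forum.example.org"  # placeholder, not our real forum
API_HEADERS = {
    "Api-Key": "ADMIN_API_KEY",  # placeholder admin API key
    "Api-Username": "system",
}

def build_user_payload(attendee):
    """Map an attendee record from our registration export to the fields
    Discourse expects when creating a user."""
    return {
        "name": attendee["full_name"],
        "username": attendee["email"].split("@")[0][:20],
        "email": attendee["email"],
        # throwaway password; attendees log in via a password reset email
        "password": secrets.token_urlsafe(16),
        "active": True,
    }

def import_attendee(attendee):
    """Create one Discourse user via the admin API."""
    req = urllib.request.Request(
        f"{DISCOURSE_URL}/users.json",
        data=json.dumps(build_user_payload(attendee)).encode(),
        headers={**API_HEADERS, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The real scripts also had to de-duplicate usernames and retry on rate limits, which is where most of the actual work went.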

Developing plugins for Discourse is both a pleasure and a very frustrating experience. It’s a mixture of (an older version of) Ember.js on the frontend and Ruby on Rails on the backend. The issue is that you’re writing a plugin against mostly undocumented Discourse APIs. So in the end you just have to start reading the Discourse source and be a bit creative in how you approach your code. I think it’s worth it, but next time I’ll have to invest more time in learning Rails so I can figure out better integrations.

I considered going with Sched, as we’ve used it in the past, but I decided that it wasn’t flexible enough for our needs. I also looked into building it on top of WordPress, but I didn’t find any community solutions that seemed like they would work for us.

Before the conference

Since we had a lot of registrations at 25-50 USD per ticket, I decided to go with Stripe, WordPress, and Gravity Forms with the Stripe add-on. For the most part, it worked very well and Stripe made the whole credit card integration easy. After working with Eventbrite in the past, I’m happy with the choice as it gave us much more flexibility at lower cost. (We also had the WP + Gravity Forms stack pre-built from previous years.)

For our Call for Presentations, we used EasyChair. It’s specialized software that is mostly used in the space of organizing academic conferences. I’m not a big fan, but at our scale and number of reviewers, all other options seemed worse or much more expensive. We got about 200 presentation proposals that we distributed to 50 reviewers.

Archiving the conference

We recorded most of the sessions in Zoom and the recordings were available there within minutes after each session ended. We then transferred these recordings to YouTube and pasted links back into our Discourse.

I’m not entirely happy with using YouTube as a video hosting solution. It’s a definite trade-off in terms of privacy and control. At the same time, using any other vendor is significantly more expensive. Hosting it ourselves would also be a costly and non-trivial thing to do. This is something that I’d like to revisit for next year to figure out if there are any better options.

Some technology is less friendly

Building the schedule

Building a schedule for 130 sessions across 5 days, multiple tracks, and basically all time zones ends up being a very complex problem. I wish there was better software than Google Spreadsheets that we could use for this. I know there are schedule solvers out there, but I have yet to find one that would make sense for us (at a sensible price point).

Connecting all the technologies

Our back-office stack is Google Workspace, so a lot of Google Spreadsheets, mixed with Slack, Mailgun, and Python scripts. What I found is that it takes a lot of effort to build good workflows where input in one system (e.g. a registration form) triggers a welcome email from another system. If I were building a Software as a Service (SaaS) product, I would make this part of the onboarding process. But with a conference, the system is more of a drip-email campaign. At some point, you release information to everyone, but if a person registered after that date, you then need to both back-fill the old information and send them the new one.
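The back-fill logic itself is simple once you write it down. Here is a hypothetical sketch; the campaign names and dates are made up, not our actual schedule:

```python
from datetime import date

# A made-up drip campaign: (email name, date it was released to everyone)
CAMPAIGN = [
    ("welcome", date(2020, 10, 1)),
    ("program-live", date(2020, 10, 20)),
    ("joining-instructions", date(2020, 11, 9)),
]

def emails_to_backfill(registered_on, campaign=CAMPAIGN):
    """Return the campaign emails that were already released before this
    person registered, and therefore need to be re-sent just to them."""
    return [name for name, released_on in campaign
            if released_on < registered_on]
```

Someone registering on November 1st would get "welcome" and "program-live" back-filled, and "joining-instructions" later, together with everyone else.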

Onboarding suddenly matters a lot

The challenge with moving from an in-person conference to online-only is that your existing workflows assume an in-person experience, but your attendees expect an online experience. Things that you can improvise in person (e.g. writing a new name tag with a pen) become a support issue in an online context. If attendees for some reason didn’t do something correctly online (or their spam filter blocked an email), you don’t have a good way to instantly fix it. It becomes a game of emails and troubleshooting to figure out the core issue.

To make this better next time, I’ll try to sketch out all parts of the onboarding process and their potential failure points. I guess we’ll also want to track churn and similar concepts from traditional SaaS businesses.

Hallway track still needs work

There are just fewer opportunities to connect with other attendees and ask good questions. We tried a lot of things – different thread formats in Discourse, drop-in Zoom sessions, interactive tools. They kind of work, but they still don’t replicate the serendipity of meeting new people and listening in on different group conversations during coffee breaks.

I’m hopeful we’ll figure out how to do this in the next few years. It might require a completely new way of thinking and organizing our time.

It’s now harder to be at a conference

Being at a conference is usually associated with a deep dive into a field, with intense connection and learning over a few days. When you’re attending from your home office, there’s a tension between still trying to do your work, being with family, and following the conference. I think that trying to do all of these things at once just isn’t possible. I hope we can figure out how to help people take this time for their professional development, without feeling guilty about neglecting other things.

Overall

Most of all, I’m just excited and surprised by how well it all worked. We managed to bring our conference to people and communities that could never afford (or be allowed) to travel to our in-person event. I’m excited about what this means for the future of our field.

Getting tailwind css to work with Roots Sage 9 theme

I’m really enjoying how easy Sage makes WordPress theme development. It’s very different in the beginning, but it soon feels a lot more like working in Django instead of WordPress.

At the same time, I’ve also been trying to use Tailwind for this project. To make it work in production, you need to configure a few more settings for purgecss that the official instructions don’t cover. The trick is that you need to define a TailwindExtractor that doesn’t strip out md:underline, hover:underline, and similar variant-prefixed CSS classes.

Notice that I also exclude a few external packages, so that purgecss doesn’t strip their CSS rules.

// webpack.config.optimize.js

'use strict'; // eslint-disable-line

const { default: ImageminPlugin } = require('imagemin-webpack-plugin');
const imageminMozjpeg = require('imagemin-mozjpeg');
const UglifyJsPlugin = require('uglifyjs-webpack-plugin');
const glob = require('glob-all');
const PurgecssPlugin = require('purgecss-webpack-plugin');

const config = require('./config');

class TailwindExtractor {
  static extract(content) {
    return content.match(/[A-Za-z0-9-_:\/]+/g) || [];
  }
}

module.exports = {
  plugins: [
    new ImageminPlugin({
      optipng: { optimizationLevel: 7 },
      gifsicle: { optimizationLevel: 3 },
      pngquant: { quality: '65-90', speed: 4 },
      svgo: {
        plugins: [
          { removeUnknownsAndDefaults: false },
          { cleanupIDs: false },
          { removeViewBox: false },
        ],
      },
      plugins: [imageminMozjpeg({ quality: 75 })],
      disable: (config.enabled.watcher),
    }),
    new UglifyJsPlugin({
      uglifyOptions: {
        ecma: 5,
        compress: {
          warnings: true,
          drop_console: true,
        },
      },
    }),
    new PurgecssPlugin({
      paths: glob.sync([
        'app/**/*.php',
        'resources/views/**/*.php',
        'resources/assets/scripts/**/*.js',
        'node_modules/vex-js/dist/js/*.js',
        'node_modules/mapbox-gl/dist/*.js',
        'node_modules/slick-carousel/slick/slick.js',
      ]),
      extractors: [
        {
          extractor: TailwindExtractor,
          extensions: ["html", "js", "php"],
        },
      ],
      whitelist: [
      ],
    }),
  ],
};

Using PurgeCSS with Ember.js

After watching talks about Functional CSS at Ember Map, I started looking into using Tailwind for my future projects. The way Tailwind works is that it generates a lot of utility CSS classes, and you then use purgecss to remove the unused ones. So I decided to try it on some of my existing Ember.js projects.

I ran it on the Open Education Week and Val 202 websites. Both are built on top of Zurb Foundation. Here are the results:

Open Education Week:
Before: 84.3 KB (14.91 KB gzipped)
After: 31.05 KB (7.04 KB gzipped)
A 52% reduction in gzipped size!

Val 202:
Before: 156.48 KB (24.5 KB gzipped)
After: 107.68 KB (18.45 KB gzipped)
A 24% reduction in gzipped size!

Not a bad improvement, since we get it almost for free, just by including it in the build pipeline. The only downside is probably a production build that takes a few seconds longer.

Using it in your Ember.js project

First install dependencies:

ember install ember-cli-postcss
yarn add --dev @fullhuman/postcss-purgecss

Then add it to your ember-cli-build.js:

const EmberApp = require('ember-cli/lib/broccoli/ember-app');
const purgecss = require('@fullhuman/postcss-purgecss');

module.exports = function (defaults) {
  const app = new EmberApp(defaults, {
    postcssOptions: {
      filter: {
        enabled: true,
        plugins: [
          {
            module: purgecss,
            options: {
              content: ['./app/**/*.hbs', './app/**/*.js'],
            }
          }
        ]
      }
    }
  });
  return app.toTree();
};

Finally, open your styles/app.scss or styles/app.css and modify it so purgecss doesn’t remove any of your custom CSS. 

// import framework like Foundation or Bootstrap here

/*! purgecss start ignore */

// your css and ember-addon @imports go here

/*! purgecss end ignore */

That’s all. If this isn’t enough, you can also set additional whitelistPatterns and whitelistPatternsChildren to keep additional CSS rules in your final build.
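For example, to keep classes that never appear literally in your templates (classes Ember adds at runtime, or ones generated by addons), the purgecss plugin options could be extended roughly like this. The specific patterns are illustrative; which ones you need depends on your addons:

```javascript
// Sketch: whitelisting runtime-generated classes so purgecss keeps them.
// The patterns below are examples, not a definitive list.
{
  module: purgecss,
  options: {
    content: ['./app/**/*.hbs', './app/**/*.js'],
    // keep e.g. .ember-view and .ember-application added by the framework
    whitelistPatterns: [/^ember-/],
    // also keep descendants of matched selectors (useful for addon wrappers)
    whitelistPatternsChildren: [/^ember-basic-dropdown/],
  }
}
```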

Thanks goes to @samselikoff for pointing me in the right direction to make this work.

Logging DRF Serializer errors into Sentry

For one of my Ember.js apps, I have a form flow that is a bit too complex. While I’m working on simplifying the frontend, I wanted a way to easily log the validation errors that users received. In addition to debugging, this helps me improve the labels and instructions on the form itself.

The backend in this case is driven by Django REST Framework with JSON API. The idea is to log all validation errors and forward them to Sentry at debug level.

First we declare a custom DRF exception handler that uses the JSON API exception handler and copies the data to Sentry:

# exceptions.py
from rest_framework_json_api.exceptions import exception_handler
from raven.contrib.django.raven_compat.models import client

from app.serializers import SubmissionSerializer

def custom_drf_exception_handler(exc, context):
    response = exception_handler(exc, context)

    # get_serializer_class() returns a class, so compare with issubclass
    if issubclass(context.get('view').get_serializer_class(), SubmissionSerializer):
        client.captureMessage('Form Submission Validation Error', level='debug', extra=exc.get_full_details())

    return response

And we also have to configure DRF to re-route errors through it:

# settings.py
REST_FRAMEWORK = {
    # ...
    'EXCEPTION_HANDLER': 'app.exceptions.custom_drf_exception_handler',
    # ...
}

And that’s it. The end result is that when a validation error is triggered, we now get a nice error log in Sentry:

Screenshot of validation errors from Sentry

Is it worth recording videos at Conferences and Meetups?

We’re at the busiest point of organising WebCamp Ljubljana 2016. One of the questions we had to ask ourselves is – should we record the talks? It seemed that in previous years we did it because Kiberpipa did, and everyone else did too. As organizers, we want to question decisions made in previous years. This is why we decided to investigate our decision to record the conference.

The effort required

We have 3 concurrent tracks. That means we need 3 semi-professional cameras with tripods, external mics, and all the power cabling. To get all of this together, somebody has to prepare and source the equipment. Then on the conference day, 3 people are recording and you usually need 1 extra person as support. And after everything is over, the footage has to be edited and published. A few more days of work.

How did we do in previous years?

I looked through the stats for the videos. I didn’t know what to expect, but our most viewed video had 650+ views and the second one over 500. Then it slowly drops off, but the number of videos with 50+ or 100+ views is still not too bad.

The real impact

One of the questions was – isn’t all of this already taught through blogs, books, and other conference recordings? I believe that this just isn’t true. We’re still recording only a small amount of tech content. In addition to that, some speakers resonate well with us, while others we just can’t stand. The numbers show that people are watching and sharing the videos, and that we help speakers have a longer-lasting impact.

So that made it really easy to decide to invest the effort in recording this year’s WebCamp again.

flickr photo shared by Thomas Hawk under a Creative Commons ( BY-NC ) license