This material is in early beta: over 300 suggestions and corrections are waiting to be folded in, some quite significant. Changes should be in place by July 2018, at which time printed copies and downloadable electronic copies will be made available.

Teaching Online

After reading this chapter, you will be able to

  • Explain why expectations for massive online courses were unrealistic.
  • Explain what personalized learning is, and how the term is sometimes misused.
  • Describe several key practices of successful automated courses.
  • Summarize at least four features that make instructional videos engaging.
  • Describe the pros and cons of using off-the-shelf style checking tools to give students feedback on their programs.

If you use robots to teach, you teach people to be robots.
— variously attributed

Technology has changed teaching and learning many times. Before blackboards were introduced into schools in the early 1800s, for example, there was no way for teachers to share an improvised example, diagram, or exercise with an entire class at once. Combining low cost, low maintenance, reliability, ease of use, and flexibility, blackboards enabled teachers to do things quickly and at scale that they had only been able to do slowly and piecemeal before. Similarly, the hand-held video camera revolutionized athletics training, just as the tape recorder revolutionized music instruction a decade earlier.

Many of the people pushing the Internet into classrooms don’t know this history, and don’t realize that it is just the latest in a long series of attempts to use machines to teach [Watt2014]. From the printing press through radio and television to desktop computers and mobile devices, every new way to share knowledge has produced a wave of aggressive optimists who believe that education is broken and that technology can fix it. However, ed tech’s strongest advocates have often known less about “ed” than they do about “tech”, and have often been driven more by the prospect of profit than by the desire to improve learning.

Today’s debate is often muddied by the fact that “online” and “automated” don’t have to be the same thing. Live online teaching can be a lot like leading a small-group discussion. Conversely, the only way to teach several hundred people at a time is to standardize and automate assessment; the learner’s experience is largely the same whether the automation uses software or a squad of teaching assistants working to a tightly-defined rubric.

This chapter therefore looks at how the Internet can and should be used to deliver automated instruction, i.e., to teach with recorded videos and assess via automatically-graded exercises. The next chapter will then explore ways of combining automated instruction with live teaching delivered either online or in person.


The highest-profile effort to reinvent education using the Internet is the Massive Open Online Course, or MOOC. The term was invented by David Cormier in 2008 to describe a course organized by George Siemens and Stephen Downes. That course was decentralized, but the term was quickly co-opted by creators of centralized, automated, video-based courses. The former are now sometimes referred to as “cMOOCs” to distinguish them from the less threatening “xMOOCs” offered by institutions that see the lack of a grading scheme as the first step toward anarchy in the streets. (The latter type of course is also sometimes called a “MESS”, for Massively Enhanced Sage on the Stage.)

Two strengths of the MOOC model are that learners can work when it’s convenient for them, and that they have access to a wider range of courses, both because the Internet brings them all next door and because online courses typically have lower direct and indirect costs than in-person courses. Five years ago, you couldn’t cross the street on a major university campus without hearing someone talking about how MOOCs would revolutionize education—or destroy it, or possibly both.

But MOOCs haven’t been nearly as effective as their more enthusiastic proponents claimed they would be [Ubel2017]. One reason is that recorded content is ineffective for many novices because it cannot clear up their individual misconceptions (Chapter 2): if they don’t understand an explanation the first time around, there usually isn’t a different one on offer. Another is that the automated assessment necessary in order to put the “massive” in MOOC only works well at the lower levels of Bloom’s Taxonomy. It’s also now clear that learners have to shoulder much more of the burden of staying focused in a MOOC, and that the impersonality of working online can demotivate people and encourage uncivil behavior.

[Marg2015] examined 76 MOOCs on various subjects, and found that their instructional design was poor, though organization and presentation of material was good. Closer to home, [Kim2017] studied 30 popular online coding tutorials, and found that they largely teach the same content the same way: bottom-up, starting with low-level programming concepts and building up to high-level goals. Most require learners to write programs, and provide some form of immediate feedback, but this feedback is typically very shallow. Few explain when and why concepts are useful (i.e., they don’t show how to transfer knowledge) or provide guidance for common errors, and other than rudimentary age-based differentiation, none personalize lessons based on prior coding experience or learner goals.

Personalized Learning

Few terms have been used and abused in as many ways as personalized learning. To most ed tech proponents, it means dynamically adjusting the pace or focus of lessons based on learner performance, which in practice means that if someone answers several questions in a row correctly, the computer will skip some of the subsequent questions.

Doing this can produce modest improvements in outcomes, but better is possible. For example, if many learners find a particular topic difficult, the teacher can prepare multiple alternative explanations of that point—essentially, multiple paths forward through the lesson rather than accelerating a single path—so that if one explanation doesn’t resonate, others are available. However, this requires a lot more design work on the teacher’s part, which may be why it’s a less popular approach with the tech crowd.

And even if it does work, the effects are likely to be much less than some of its advocates believe. A good teacher makes a difference of 0.1–0.15 standard deviations in end-of-year performance in grade school [Chet2014] (see this article for a brief summary). It’s simply unrealistic to believe that any kind of automation can outdo this any time soon.

So how should the web be used in teaching and learning tech skills? From an educational point of view, its pros and cons are:

  • Learners can access far more information, far more quickly, than ever before—provided, of course, that a search engine considers it worth indexing, that their internet service provider and government don’t block it, and that the truth isn’t drowned in a sea of attention-sapping disinformation.
  • Learners can access far more people than ever before as well—again, provided that they aren’t driven offline by harassment or marginalized because they don’t conform to the social norms of whichever group is talking loudest.
  • Courses can reach far more learners than before too—but only if those learners actually have access to the required technology, can afford to use it, and aren’t being used as a way to redistribute wealth from the have-nots to the haves [McMi2017].
  • Teachers can get far more detailed insight into how learners work—so long as learners are doing things that are amenable to large-scale automated analysis and aren’t in a position to object to the use of ubiquitous surveillance in the classroom.

[Marg2015,Mill2016a,Nils2017] describe ways to take advantage of the positives in the list above while avoiding the negatives:

  • Make deadlines frequent and well-publicized, and enforce them, so that learners will get into a work rhythm.
  • Keep synchronous all-class activities like live lectures to a minimum so that people don’t miss things because of scheduling conflicts.
  • Have learners contribute to collective knowledge, e.g., take notes together (Section 9.6), serve as classroom scribes, or contribute problems to shared problem sets (Section 5.3).
  • Encourage or require learners to do some of their work in small groups (2–6 people) that do have synchronous online activities such as a weekly online discussion to help learners stay engaged and motivated without creating too many scheduling headaches.
  • Create, publicize, and enforce a code of conduct so that everyone can actually (as opposed to theoretically) take part in online discussions (Section 1.4).
  • Use lots of short lesson episodes rather than a handful of lecture-length chunks in order to minimize cognitive load and provide lots of opportunities for formative assessment. This also helps with maintenance: if all of your videos are short, you can simply re-record any that need maintenance, which is often cheaper than trying to patch longer ones.
  • Remember that, disabilities aside, learners can read faster than you can talk, so use video to engage rather than instruct. The exception to this rule is that video is actually the best way to teach people verbs (actions): short screencasts that show people how to use an editor, step through code in a debugger, and so on are more effective than screenshots with text.
  • Remember that the goal when teaching novices is to identify and clear up misconceptions (Chapter 2). If early data shows that learners are struggling with some parts of a lesson, create extra alternative explanations of those points and extra exercises for them to practice on.

All of this has to be implemented somehow, which means that you need some kind of teaching platform. You can either use an all-in-one learning management system (LMS) like Moodle or Sakai, or assemble something yourself using Slack or Zulip for chat, Google Hangouts for video conversations, and WordPress, Google Docs, or any number of wiki systems for collaborative authoring. If you are just starting out, then use whatever requires the least installation and administration on your side, and the least extra learning effort on your learners’ side. (I once ran a half-day class using group text messages because that was the only tool everyone was already familiar with.)

The most important thing when choosing technology is to ask your learners what they are already using. Normal people don’t use IRC, and find its arcane conventions and interface offputting. Similarly, while this book lives in a GitHub repository, requiring non-experts to submit pull requests has been an unmitigated disaster, even with GitHub’s online editing tools. As a teacher, you’re asking people to learn a lot; the least you can do in return is learn how to use the tools they prefer.

Points for Improvement

One way to demonstrate to learners that they are learning with you, not just from you, is to allow them to edit your course notes. In live courses, we recommend that you enable them to do this as you lecture (Section 9.6); in online courses, you can put your notes into a wiki, a Google Doc, or anything else that allows you to review and comment on changes. Giving people credit for fixing mistakes, clarifying explanations, adding new examples, and writing new exercises doesn’t reduce your workload, but it increases engagement and the lesson’s lifetime (Section 6.3).

A major concern with any online community, learning or otherwise, is how to actually make it a community. Hundreds of books and presentations discuss this, but most are based on their authors’ personal experiences. [Krau2016] is a welcome exception: while it predates the accelerating descent of Twitter and Facebook into weaponized abuse and misinformation, most of what was true then is true now. [Foge2005] is also full of useful tips for the community of practice that learners may hope to join.

Freedom To and Freedom From

Isaiah Berlin’s 1958 essay “Two Concepts of Liberty” made a distinction between positive liberty, which is the ability to actually do something, and negative liberty, which is the absence of rules saying that you can’t do it. Unchecked, online discussions usually offer negative liberty (nobody’s stopping you from saying what you think) but not positive liberty (many people can’t actually be heard). One way to address this is to introduce some kind of throttling, such as only allowing each learner to contribute one message per discussion thread per day. Doing this lets those who have something to say say it, while clearing space for others to say things as well.
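Such a throttle is easy to implement. Here is a minimal sketch (the class and method names are invented) that tracks how many messages each learner has posted to each thread each day:

```python
from collections import defaultdict

class ThrottledForum:
    """Toy forum that allows each learner at most `limit_per_day`
    messages per discussion thread per day."""

    def __init__(self, limit_per_day=1):
        self.limit = limit_per_day
        self.posts = defaultdict(int)  # (learner, thread, day) -> count

    def try_post(self, learner, thread, day, message):
        """Accept the message if the learner is under quota, else reject it."""
        key = (learner, thread, day)
        if self.posts[key] >= self.limit:
            return False
        self.posts[key] += 1
        return True

forum = ThrottledForum()
assert forum.try_post("sam", "hw1", "2018-06-01", "first thought")
assert not forum.try_post("sam", "hw1", "2018-06-01", "second thought")
assert forum.try_post("sam", "hw2", "2018-06-01", "a different thread is fine")
```

A real forum would key on timestamps rather than a day string, but the quota logic is the same.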

One other concern people have about teaching online is cheating. Day-to-day dishonesty is no more common in online classes than in face-to-face settings, but the temptation to have someone else write the final exam, and the difficulty of checking whether this happened, is one of the reasons educational institutions have been reluctant to offer credit for pure online classes. Remote exam proctoring is possible, usually by using a webcam to watch the learner take the exam. Before investing in this, read [Lang2013], which explores why and how learners cheat, and how courses can be structured to avoid giving them a reason to do so.


A core element of xMOOCs is their reliance on recorded video lectures. As mentioned in Chapter 8, a teaching technique called Direct Instruction that is based on precise delivery of a well-designed script has repeatedly been shown to be effective [Stoc2018], so recorded videos can in principle be effective. However, DI scripts have to be designed, tested, and refined very carefully, which is an investment that many MOOC authors have been unwilling or unable to make. Making a small change to a web page or a slide deck only takes a few minutes; making even a small change to a short video takes an hour or more, so the cost to the teacher of acting on feedback can be unsupportable. And even when they’re well made, videos have to be combined with activities to be beneficial: [Koed2015] estimated “the learning benefit from extra doing to be more than six times that of extra watching or reading.”

[Guo2014] measured engagement by looking at how long learners watched MOOC videos. Some of its key findings were:

  • Shorter videos are much more engaging—videos should be no more than six minutes long.
  • A talking head superimposed on slides is more engaging than voice over slides alone.
  • Videos that felt personal could be more engaging than high-quality studio recordings, so filming in informal settings could work better than professional studio production, and at lower cost.
  • Drawing on a tablet is more engaging than PowerPoint slides or code screencasts, though it’s not clear whether this is because of the motion and informality, or because it reduces the amount of text on the screen.
  • It’s OK for teachers to speak fairly fast as long as they are enthusiastic.

One thing [Guo2014] didn’t address is the chicken-and-egg problem: do learners find a certain kind of video engaging because they’re used to it, so producing more videos of that kind will increase engagement simply because of a feedback loop? Or do these recommendations reflect some deeper cognitive processes? Another thing this paper didn’t look at is learning outcomes: we know that learner evaluations of courses don’t correlate with learning [Star2014], and while it’s plausible that learners won’t learn from things they don’t watch, it remains to be proven that they do learn from things they do watch.

I’m a Little Uncomfortable

[Guo2014]’s research was approved by a university research ethics board, the learners whose viewing habits were monitored almost certainly clicked “agree” on a terms of service agreement at some point, and I’m glad to have these insights. On the other hand, I attended the conference at which this paper was published, and the word “privacy” didn’t appear in the title or abstract of any of the dozens of papers or posters presented. Given a choice, I’d rather not know how engaged learners are than see privacy become obsolete.

There are many different ways to record video lessons; to find out which are most effective, [Mull2007a] assigned 364 first-year physics learners to online multimedia treatments of Newton’s First and Second Laws in one of four styles:


Exposition:

concise lecture-style presentation.

Extended Exposition:

as above with additional interesting information.

Refutation:

Exposition with common misconceptions explicitly stated and refuted.

Dialogue:

Learner-tutor discussion of the same material as in the Refutation.

Refutation and Dialogue produced the greatest learning gains compared to Exposition; learners with low prior knowledge benefited most, and those with high prior knowledge were not disadvantaged.

If you are teaching programming, you will often use screencasts instead of slides, since they have many of the same advantages as live coding (Section 8.3). [Chen2009] offers useful tips for creating and critiquing screencasts and other videos. The figure below shows the patterns they present and the relationships between them.

Patterns for Screencasting from [Chen2009]

Automatic Grading

Automatic program grading tools have been around longer than I’ve been alive: the earliest published mention dates from 1960 [Holl1960], and the surveys published in [Douc2005,Ihan2010] mention many specific tools by name. Building such tools is a lot more complex than it might first seem. How are assignments represented? How are submissions tracked and reported? Can learners co-operate? How can submissions be executed safely? [Edwa2014a] is an entire paper devoted to an adaptive scheme for detecting and managing infinite loops and other non-terminating code submissions, and that’s just one of the many issues that comes up.
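For example, the simplest defense against non-terminating submissions is to run each one in a separate process with a wall-clock limit. The sketch below shows the idea; the function name and return values are my own, and a real autograder such as the one described in [Edwa2014a] does far more (sandboxing, resource quotas, adaptive limits):

```python
import subprocess
import sys

def run_submission(path, timeout_seconds=5):
    """Run a learner's Python script in a child process with a time limit,
    so an infinite loop can't hang the grader. Returns (status, stdout)."""
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout_seconds,
        )
        status = "ok" if result.returncode == 0 else "error"
        return (status, result.stdout)
    except subprocess.TimeoutExpired:
        return ("timeout", "")
```

`subprocess.run` kills the child when the timeout expires, which handles the most common failure mode; it does nothing about submissions that fill the disk or fork repeatedly, which is why production graders layer on operating-system-level sandboxing.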

As elsewhere, it’s important to distinguish learner satisfaction from learning outcomes. [Magu2018] switched informal programming labs to a weekly machine-evaluated test for a second-year CS course using an auto-grading tool originally developed for programming competitions. Learners didn’t like the automated system, but the overall failure rate for the course was halved, and the number of learners gaining first class honors tripled. In contrast, [Rubi2014] also began to use an auto-grader designed for competitions, but saw no significant decrease in their learners’ dropout rates; once again, learners made some negative comments about the tool, which the authors attribute to its feedback messages rather than to dislike of autograding.

[Srid2016] took a different approach. They used fuzz testing (i.e., randomly-generated test cases) to check whether learner code does the same thing as a reference implementation supplied by the teacher. In the first project of a 1400-learner introductory course, fuzz testing caught errors that were missed by a suite of hand-written test cases for more than 48% of learners, which clearly demonstrates its value.
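The idea is straightforward to sketch: generate random inputs and flag any on which the learner’s function and the teacher’s reference disagree. Everything below (the function names and the deliberately buggy submission) is invented for illustration, not taken from [Srid2016]:

```python
import random

def reference_largest(values):
    """Teacher's reference implementation: largest value in a list."""
    return max(values)

def student_largest(values):
    """A buggy learner submission: fails when every value is negative,
    because the running maximum starts at 0."""
    result = 0
    for v in values:
        if v > result:
            result = v
    return result

def fuzz(student, reference, trials=1000, seed=42):
    """Compare a submission against the reference on random inputs;
    return the first input where they disagree, or None."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = [rng.randint(-100, 100) for _ in range(rng.randint(1, 10))]
        if student(case) != reference(case):
            return case
    return None

counterexample = fuzz(student_largest, reference_largest)
assert counterexample is not None   # random inputs expose the bug
assert max(counterexample) < 0      # specifically, an all-negative list
```

Hand-written test suites often miss exactly this kind of boundary case, which is why random testing catches errors the teacher’s suite does not.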

[Basu2015] gave learners a suite of solution test cases, but learners had to unlock each one by answering questions about its expected behavior before they were allowed to apply it to their proposed solution. For example, suppose learners are writing a function to find the largest adjacent pair of numbers in a list; before being allowed to use the tests associated with this question, they have to choose the right answer to, “What does largestPair(4, 3, -1, 5, 3, 3) produce?” (The correct answer is (5, 3).) In a 1300-person university course, the vast majority of learners chose to validate their understanding of test cases this way before attempting to solve problems, and then asked fewer questions and expressed less confusion about assignments.
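A minimal sketch of this unlocking scheme might look like the following; the function and variable names are hypothetical, not [Basu2015]’s actual system:

```python
def largest_pair(values):
    """Reference solution: the adjacent pair with the largest sum."""
    best = (values[0], values[1])
    for left, right in zip(values, values[1:]):
        if left + right > sum(best):
            best = (left, right)
    return best

# Each test is locked behind a prediction question about its expected output.
LOCKED_TESTS = {
    "What does largest_pair([4, 3, -1, 5, 3, 3]) produce?": (5, 3),
}

def unlock(question, learner_answer):
    """The test only becomes usable once the learner correctly
    predicts its result."""
    return LOCKED_TESTS[question] == learner_answer

assert largest_pair([4, 3, -1, 5, 3, 3]) == (5, 3)
assert unlock("What does largest_pair([4, 3, -1, 5, 3, 3]) produce?", (5, 3))
```

The point of the design is that learners must read and reason about a test before being able to lean on it, which turns the test suite itself into a comprehension exercise.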

It’s common and tempting to use off-the-shelf style checking tools to grade learners’ code. However, [Nutb2016] initially found no correlation between human-provided marks and style-checker rule violations. Sometimes this was because learners violated one rule many times (thereby losing more points than they should have), and other times it was because they submitted the assignment starter code with few alterations and got more points than they should have. The authors modified the autograder’s rules to reflect this, adjusting the weight given to infrequent and overly-frequent violations of particular rules. Unsurprisingly, after all these tweaks there was a stronger positive correlation with manual assessment.
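One plausible version of such a tweak is to cap how many times any single rule can cost points, so that one repeated mistake doesn’t swamp the mark. The rule names, weights, and ten-point scale below are invented for illustration:

```python
from collections import Counter

def style_score(violations, weights=None, cap=3):
    """Turn a list of style-rule violations into a mark out of 10,
    capping how many times any one rule can cost points."""
    weights = weights or {}
    total = 0.0
    for rule, count in Counter(violations).items():
        total += weights.get(rule, 1.0) * min(count, cap)
    return max(0.0, 10.0 - total)

# A rule broken eleven times costs no more than the cap allows.
assert style_score(["E501"] * 11) == style_score(["E501"] * 3)
```

A checker like pycodestyle can supply the raw violation list; the grading policy layered on top is where the human judgment lives.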

[Buff2015] presents a well-informed reflection on the whole idea of providing automated feedback. Their starting point is that, “Automated grading systems help learners identify bugs in their code, [but] may inadvertently discourage learners from thinking critically and testing thoroughly and instead encourage dependence on the teacher’s tests.” One of the key issues they identified is that a learner may thoroughly test their code, but the feature may still not be implemented according to the teacher’s specifications. In this case, the “failure” is not caused by a lack of testing, but by a misunderstanding of the requirements, and it is unlikely that more testing will expose the problem. If the auto-grading system doesn’t provide insightful, actionable feedback, this experience will only frustrate the learner.

In order to provide that feedback, [Buff2015]’s system identifies which method or methods of the learner’s code are executed by failing tests, so that the system can associate failed tests with particular features within the learner’s submission. The system decides whether specific hints have been “earned” by seeing whether the learner has tested the associated feature enough, so learners cannot rely on hints instead of doing tests.

[Keun2016a,Keun2016b] classified the messages produced by 69 auto-grading tools. They found that these often do not give feedback on how to fix problems and take the next step. They also found that most teachers cannot easily adapt most of the tools to their needs; as with many workflow tools, they tend to bake in their creators’ unconscious or unrecognized assumptions about how institutions work. Their work is ongoing, and their detailed classification scheme is a useful shopping list when looking at tools of this kind.

[Srid2016] discussed strategies for sharing feedback with learners when automatically testing their code. The first is to provide the expected output for the tests—but then learners hard-code output for those inputs (because anything that can be gamed, will be). An alternative is to report the pass/fail results for the learners’ code, but only supply the actual inputs and outputs of the tests after the submission date. This can be frustrating, because it tells learners they are wrong, but not why.

A third option is to provide hashed output, so learners can tell if their output is correct without knowing what the output is unless they reproduce it. This requires a bit more work and explanation, but strikes a good balance between revealing answers prematurely and not revealing them when it would help.
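A sketch of hashed-output checking, assuming SHA-256 as the hash (any cryptographic hash would do) and light whitespace normalization so trivial formatting differences aren’t punished:

```python
import hashlib

def fingerprint(output):
    """Hash a program's output so learners can verify correctness
    without being handed the answer itself."""
    return hashlib.sha256(output.strip().encode("utf-8")).hexdigest()

# Published with the assignment: the hash only, never the output.
EXPECTED = fingerprint("42")

def check(learner_output):
    return fingerprint(learner_output) == EXPECTED

assert check("42")
assert check(" 42 \n")   # leading/trailing whitespace is forgiven
assert not check("41")
```

Learners can rerun `check` as often as they like; the hash tells them whether they are right without revealing what right is.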

Flipped Classrooms

Fully automated teaching is one way to use the web in teaching; in practice, almost all learning in affluent societies has an online component: sometimes officially, and if not, through peer-to-peer back channels and surreptitious searches for answers to homework questions. Combining live and automated instruction allows instructors to use the strengths of both. In a classroom, the instructor can answer questions immediately, but it takes time for learners to get feedback on their coding exercises (sometimes days or weeks). Online, it can take a long time to get an answer, but learners can get immediate feedback on their coding (at least for those kinds of exercises we can auto-grade). Similarly, online exercises have to be more detailed because they must anticipate learners’ questions: teaching live covers the intersection of what everyone needs to know and expands on demand, while teaching online has to cover the union of what everyone might need to know, because it can’t.

The most popular hybrid teaching strategy today is the flipped classroom, in which learners watch recorded lessons on their own, and class time is used for discussion and to work through problem sets. Originally proposed in [King1993], the idea was popularized as part of peer instruction (Section 9.2), and has been studied intensively over the past decade. For example, [Camp2016] compared students who chose to take a CS1 class online with those who took it in person in a flipped classroom. Completion of (unmarked) practice exercises correlated with exam scores for both, but the completion rate of rehearsal exercises by online students was significantly lower than lecture attendance rates for in-person students. Looking at what did affect the grade, they found that the students’ perception of the material’s intrinsic value was only a factor for the flipped section (and only once results were controlled for prior programming experience). Conversely, test anxiety and self-efficacy were factors only for the online section; the authors recommend trying to improve self-efficacy by increasing instructor presence online.

But are lectures worth attending at all? Or should we just provide recordings? [Nord2017] examined the impact of recordings on both lecture attendance and students’ performance at different levels. In most cases the study found no negative consequences of making recordings available; in particular, students don’t skip lectures when recordings are available (at least, not any more than they usually do). The benefits of providing recordings are greatest for students early in their careers, but diminish as students become more mature.

Life Online

[Nuth2007] found that there are three overlapping worlds in every classroom: the public (what the teacher is saying and doing), the social (peer-to-peer interactions between learners), and the private (inside each learner’s head). Of these, the most important is usually the social: learners pick up as much via cues from their peers as they do from formal instruction.

The key to making any form of online teaching effective is therefore to facilitate peer-to-peer interactions. To aid this, courses almost always have some kind of discussion forum. [Vell2017] analyzed discussion forum posts from 395 CS2 students at two universities by dividing them into four categories:


Active:

request for help that does not display reasoning and doesn’t display what the student has already tried or already knows.

Constructive:

reflect students’ reasoning or attempts to construct a solution to the problem.

Logistical:

course policies, schedules, assignment submission, etc.

Content clarification:

request for additional information that doesn’t reveal the student’s own thinking.

They found that constructive and logistical questions dominated, and that constructive questions correlated with grades. They also found that students rarely ask more than one active question in a course, and that these don’t correlate with grades. While this is disappointing, knowing it helps set instructors’ expectations: while we might all want our courses to have lively online communities, most won’t.

Learners use forums in very different ways, and with very different results. [Mill2016a] observed that, “procrastinators are particularly unlikely to participate in online discussion forums, and this reduced participation, in turn, is correlated with worse grades. A possible explanation for this correlation is that procrastinators are especially hesitant to join in once the discussion is under way, perhaps because they worry about being perceived as newcomers in an established conversation. This aversion to jump in late causes them to miss out on the important learning and motivation benefits of peer-to-peer interaction.”


[Gull2004] describes an innovative online coding contest that combines collaboration and competition. The contest starts when a problem description is posted along with a correct, but inefficient, solution. When it ends, the winner is the person who has made the greatest overall contribution to improving the performance of the final solution. All submissions are in the open, so that participants can see one another’s work and borrow ideas from each other; as the paper shows, the final solution is almost always a hybrid borrowing ideas from many people.

[Batt2018] described a small-scale variation of this used in an introductory computing class. In stage one, each student submitted a programming project individually. In stage two, students were paired to create an improved solution to the same problem. The assessment indicates that two-stage projects tend to improve students’ understanding, and that they enjoyed the process.

Discussion isn’t the only way to get students to work together online. While the discussion in Section 11.3 assumed that grading had to be fully automatic in order to scale to large classes, that doesn’t have to be the case. [Pare2008] and [Kulk2013] report experiments in which learners grade each other’s work, and the grades they assign are then compared with grades given by graduate-level teaching assistants or other experts. Both found that student-assigned grades agreed with expert-assigned grades as often as the experts’ grades agreed with each other, and that a few simple steps (such as filtering out obviously unconsidered responses or structuring rubrics) decreased disagreement even further. And as discussed in Section 5.3, collusion and bias are not significant factors in peer grading.
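The comparison these studies make can be illustrated with a toy agreement measure: the average absolute difference between two graders’ marks for the same submissions. The grades below are made up, not data from either paper:

```python
def mean_abs_disagreement(grades_a, grades_b):
    """Average absolute difference between two graders' marks
    for the same set of submissions."""
    assert len(grades_a) == len(grades_b)
    return sum(abs(a - b) for a, b in zip(grades_a, grades_b)) / len(grades_a)

peer = [8, 7, 9, 6, 10]          # grades assigned by a student
expert = [8, 6, 9, 7, 10]        # grades assigned by a TA
other_expert = [7, 7, 10, 6, 9]  # a second TA, for comparison

# Peer grading is "good enough" when the peer-expert gap is no wider
# than the gap between two experts.
assert mean_abs_disagreement(peer, expert) <= mean_abs_disagreement(expert, other_expert)
```

Real studies use more careful statistics (correlation, inter-rater reliability coefficients), but the underlying question is the same: is the peer-to-expert gap any wider than the expert-to-expert gap?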

[Cumm2011] looked at the use of shareable feedback tags on homework; students could attach tags to specific locations in coding assignments (like code review) so that there’s no navigational cost for the reader, and they controlled whether to share their work and feedback anonymously. Students found tag clouds of feedback on their own work useful, but the tags were really only meaningful in context. This is unsurprising: the greater the separation between action and feedback, the greater the cognitive load. What wasn’t expected was that the best and worst students were more likely to share than middling students.

Trust, but Educate

The most common way to measure the validity of feedback is to compare students’ grades to experts’ grades, but calibrated peer review can be equally effective [Kulk2013]. Before asking learners to grade each others’ work, they are asked to grade samples and compare their results with the grades assigned by the teacher. Once the two align, the learner is allowed to start giving grades to peers. Given that critical reading is an effective way to learn, this result may point to a future in which learners use technology to make judgments, rather than being judged by technology.

One technique we will definitely see more of in coming years is online streaming of live coding sessions [Haar2017]. This has most of the benefits discussed in Section 8.3, and when combined with collaborative note-taking (Section 9.6) it can come pretty close to approximating an in-class experience.

Looking even further ahead, [Ijss2000] identified four levels of online presence, from realism (where we can’t tell the difference) through immersion (we forget the difference) and involvement (we’re engaged, but aware that it’s online) to suspension of disbelief (where the participant is doing most of the work). Crucially, they distinguish physical presence, which is the sense of actually being somewhere, and social presence, which is the sense of being with others. In most learning situations, the latter is more important, and one way to foster it is to bring the technology learners use every day into the classroom. For example, [Deb2018] found that doing in-class exercises with realtime feedback using mobile devices improved concept retention and student engagement while reducing failure rates.

Hybrid Presence

Just as combining online and in-person instruction can be more effective than either on its own, combining online and in-person presence can outperform either. I have delivered very successful classes using real-time remote instruction, in which learners were co-located at 2–6 sites with helpers present while I taught via streaming video. This scales well, saves on travel costs, and is less disruptive for learners (particularly those with family responsibilities). What doesn't work is having one group in person and one or more remote: with the best will in the world, the local participants get far more attention.

Online teaching is still in its infancy: [Luxt2009] surveys peer assessment tools that could be useful in computing education, and [Broo2016] describes many other ways groups can discuss things, but only a handful of these ideas are widely known or used.

I think that our grandchildren will probably regard the distinction we make between what we call the real world and what they think of as simply the world as the quaintest and most incomprehensible thing about us.
— William Gibson


Give Feedback (whole class/20 minutes)

Watch this screencast as a group and give feedback on it. Organize feedback along two axes: positive vs. negative and content vs. presentation. When you are done, have each person in the class add one point to a 2 × 2 grid on a whiteboard (or in the shared notes) without duplicating any points that are already there. What did other people see that you missed? What did they note that you strongly agree or disagree with? (You can compare your answers with the checklist in Appendix I.)

Classifying Online Coding Lessons (individual/15 minutes)

Use this derivative of [Kim2017]'s rubric for evaluating online coding tutorials to classify your favorite tutorial:

- age, educational status, coding experience.
- application to authentic tasks, pointers to subsequent knowledge.
- variables, arithmetic, logical, conditionals, loops, arrays, functions, objects.
- bottom-up, need-driven.
- lecture-based, project-based, storyline.
- learners write code.
- output correctness, code structure, code style.
- how to use, when to use, why to use.
- additional materials for self-monitoring.

Two-Way Video (pairs/10 minutes)

Record a 2–3 minute video of yourself doing something, then swap machines with a partner so that each of you can watch the other's video at 4X speed. How easy is it to follow what's going on? What, if anything, did you miss?

Viewpoints (individual/10 minutes)

According to [Irib2009], different disciplines focus on different factors affecting the success or otherwise of online communities:

- Business: customer loyalty, brand management, extrinsic motivation.
- Psychology: sense of community, intrinsic motivation.
- Sociology: group identity, physical community, social capital, collective action.
- Computer Science: technological implementation.

Which of these perspectives most closely corresponds to your own? Which are you least aligned with?

Helping or Harming (small groups/30 minutes)

This article by Susan Dynarski explains how and why schools are putting students who fail in-person courses into online courses, and how this sets them up for even further failure. Working in small groups, read the article, come up with 2–3 things that schools could do to compensate for these negative effects, and create rough estimates of their per-student costs. Compare your suggestions and costs with those of other groups. How many full-time teaching positions do you think would have to be cut in order to free up resources to implement the most popular ideas for 100 students? As a class, do you think that would be a net benefit for the students or not?

(Budgeting exercises like this are a good way to tell who’s serious about educational change. Everyone can think of things they’d like to do; far fewer are willing to talk about the tradeoffs needed to make change happen.)