Enseñar tecnología en comunidad

Cómo crear y dar lecciones que funcionen
y construir una comunidad docente a su alrededor

Greg Wilson

Taylor & Francis, 2019, 978-0-367-35328-5

Purchase online

Dedicatoria

Para mi madre, Doris Wilson,
que enseñó a cientos de niños y niñas a leer y creer en sí mismos.

Y para mi hermano Jeff, que no vivió para verlo terminado.
"Recuerda, todavía tienes muchos momentos buenos frente a ti."

La traducción de este libro está dedicada a la memoria de Rebeca Cherep de Guber

Todas las regalías de la venta de este libro serán donadas a
MetaDocencia,
una organización basada en trabajo voluntario que enseña
a docentes de habla hispana de todo el mundo
a enseñar de forma efectiva usando prácticas basadas en evidencia.

Las reglas

  1. Sé amable: todo lo demás son detalles.

  2. Recuerda que tú no eres tus estudiantes,

  3. que la mayoría de la gente prefiere fracasar antes que cambiar,

  4. y que el 90% de la magia consiste en saber una cosa más (que tu audiencia).

  5. Nunca enseñes solo/a.

  6. Nunca dudes en sacrificar la verdad por la claridad.

  7. Haz de cada error una lección.

  8. Recuerda que ninguna clase sobrevive al primer contacto con estudiantes,

  9. que cada clase es demasiado corta para quien enseña y demasiado larga para quien la recibe,

  10. y que nadie tendrá más entusiasmo que tú por tu clase.

Sobre la traducción

Este es el sitio web de la versión en español, aún en proceso de traducción, de Teaching Tech Together de Greg Wilson. La traducción de Enseñar Tecnología en Comunidad es un proyecto colaborativo de la comunidad de R-Ladies y de MetaDocencia en Latinoamérica, que tiene por objetivo traducir al español material actualizado y de calidad para hacerlo accesible a hispanohablantes. Iniciamos la traducción en marzo de 2020.

Quienes trabajamos en este proyecto somos (en orden alfabético): Laura Acion, Mónica Alonso, Zulemma Bazurto, Alejandra Bellini, Yanina Bellini Saibene, Juliana Benitez Saldivar, Ruth Chirinos, Paola Corrales, Ana Laura Diedrich, Patricia Loto, Priscilla Minotti, Natalia Morandeira, Lucía Rodríguez Planes, Paloma Rojas, Gabriela Sandoval, Yuriko Sosa y Yara Terrazas-Carafa. La coordinación del trabajo está a cargo de Yanina Bellini Saibene y la edición final a cargo de Yanina Bellini Saibene y Natalia Morandeira.

Malena Zabalegui nos aconsejó sobre el uso de lenguaje no sexista e inclusivo para la realización de esta traducción.

También generamos un glosario y diccionario bilingüe de términos de educación y tecnología a partir del glosario del libro y del listado de términos a traducir (o no) del libro. El desarrollo de este glosario está a cargo de Yanina Bellini Saibene.

Todos los detalles del proceso de traducción se pueden consultar en la documentación del proyecto.

Introducción

En todo el mundo han surgido grupos de base para enseñar programación, diseño web, robótica y otras habilidades a free-range learners. Los grupos existen para que la gente no tenga que aprender estas cosas por su cuenta, pero, irónicamente, sus fundadores/as y docentes están muchas veces enseñándose a sí mismos/as cómo enseñar.

Hay una forma más conveniente. Así como conocer un par de cuestiones básicas sobre gérmenes y nutrición te puede ayudar a permanecer sano/a, conocer un par de cosas sobre psicología cognitiva, diseño instruccional, inclusividad y organización comunitaria te puede ayudar a aumentar tu efectividad como docente. Este libro presenta ideas clave que puedes usar ahora mismo, explica por qué creemos que son ciertas y te señala otros recursos que te ayudarán a ir más lejos.

Reutilización

Partes de este libro fueron originalmente creadas para el programa de entrenamiento de instructores/as de Software Carpentry y todas ellas pueden ser libremente distribuidas y reutilizadas bajo la licencia Creative Commons Attribution-NonCommercial 4.0 (Appendix 16). Puedes usar la versión online disponible en http://teachtogether.tech/ (versión en inglés) o bien en http://teachtogether.tech/es/ (versión traducida al castellano) en cualquier clase (gratuita o paga), y puedes citar pequeños extractos bajo el criterio de uso justo, pero no puedes re-publicar largos fragmentos en trabajos comerciales sin permiso previo.

Las contribuciones, correcciones y sugerencias son bienvenidas, y quienes contribuyan serán agradecidos/as cada vez que una nueva versión sea publicada. Por favor consulta Appendix 18 para detalles y Appendix 17 para nuestro código de conducta.

Quién eres

La Section 6.1 explica cómo averiguar quiénes son tus estudiantes. Los cuatro perfiles de destinatario/a de este libro son docentes usuarios/as finales: la enseñanza no es su ocupación principal, tienen poco o ningún conocimiento sobre pedagogía y posiblemente trabajan fuera de aulas institucionales.

Emily

está entrenada como bibliotecaria y ahora trabaja como diseñadora web y gestora de proyectos en una pequeña empresa consultora. En su tiempo libre, ayuda a impartir clases de diseño para mujeres que ingresan a la tecnología como una segunda carrera. Ahora está reclutando colegas para dar más clases en su área y quiere saber cómo preparar lecciones que otras personas puedan usar, así como hacer crecer una organización de enseñanza voluntaria.

Moshe

es un programador profesional, cuyos dos hijos adolescentes asisten a una escuela que no ofrece clases de programación. Se ha ofrecido como voluntario para dirigir un club de programación mensual después del horario de clases. A pesar de que frecuentemente hace presentaciones ante sus colegas, no tiene experiencia de enseñanza en el aula. Quiere aprender cómo construir lecciones efectivas en un tiempo razonable y le gustaría saber más acerca de los pros y contras de las clases en línea al propio ritmo de quien las toma.

Samira

es una estudiante de robótica que está considerando ser docente luego de graduarse. Quiere ayudar a sus pares en los talleres de robótica de fines de semana, pero nunca antes ha dado una clase y siente con fuerza el síndrome de la impostora. Quiere aprender más acerca de educación en general para decidir si la enseñanza es para ella y también está buscando sugerencias específicas que la ayuden a dar lecciones más efectivamente.

Gene

es docente de ciencias de la computación en una universidad. Ha estado enseñando cursos de grado sobre sistemas operativos por seis años y cada vez se convence más de que tiene que haber una mejor manera de enseñar. El único entrenamiento disponible a través del centro de enseñanza y aprendizaje de su universidad es sobre publicar tareas y enviar evaluaciones en el sistema online de gestión del aprendizaje, por lo que quiere descubrir qué otro entrenamiento podría pedir.

Estas personas tienen una variedad de conocimientos técnicos previos y alguna experiencia previa con la enseñanza, pero carecen de entrenamiento formal en enseñanza, diseño de lecciones u organización comunitaria. La mayoría trabajan con free-range learners y están enfocadas en adolescentes y personas adultas más que en niños/as; todas estas personas tienen tiempo y recursos limitados. Esperamos que nuestro cuarteto use este material de la siguiente manera:

Emily

participará en un grupo de lectura semanal en línea con sus voluntarias.

Moshe

va a cubrir parte de este libro en un taller de fin de semana de un día y estudiará el resto por su cuenta.

Samira

usará este libro en un curso de grado de un semestre que incluirá tareas, un proyecto y un examen final.

Gene

leerá el libro por su cuenta en su oficina o mientras viaja en el transporte público, mientras desea que las universidades hagan más para apoyar la enseñanza de alta calidad.

Qué otras cosas leer

Si estás en apuros o quieres tener un pantallazo de qué cosas cubrirá este libro, [Brow2018] presenta diez sugerencias basadas en evidencia para enseñar computación. También puedes disfrutar:

  • El entrenamiento para instructores/as de las Carpentries, en el cual está basado este libro.

  • [Lang2016] y [Hust2012], que son textos cortos y accesibles, que conectan las cosas que puedes hacer ahora mismo con la investigación que hay detrás de ellas.

  • [Berg2012,Lemo2014,Majo2015,Broo2016,Rice2018,Wein2018b] están repletos de sugerencias prácticas sobre cosas que puedes hacer en tu clase, pero pueden cobrar más sentido una vez que tengas un marco conceptual para entender por qué sus ideas funcionan.

  • [DeBr2015], el cual explica qué es cierto sobre educación al explicar qué cosas no son ciertas, y [Dida2016], que fundamenta la teoría del aprendizaje en psicología cognitiva.

  • [Pape1993], que continúa siendo una visión inspiradora sobre cómo las computadoras pueden cambiar la educación. La excelente descripción de Amy Ko es una síntesis de las ideas de Papert mejor que la que podría hacer yo, y [Craw2010] es un acompañamiento provocador y estimulante para ambos textos.

  • [Gree2014,McMi2017,Watt2014] explican por qué tantos intentos de reforma educativa han fracasado a lo largo de los últimos cuarenta años, cómo las instituciones educativas con fines de lucro han explotado y exacerbado la desigualdad en nuestra sociedad, y cómo la tecnología ha fracasado repetidamente en revolucionar la educación.

  • [Brow2007] y [Mann2015], porque no puedes enseñar bien sin cambiar el sistema en el que enseñamos, y no puedes hacer esto por tu cuenta.

Quienes desean material más académico también pueden encontrar gratificantes [Guzd2015a,Hazz2014,Sent2018,Finc2019,Hpl2018], mientras que el blog de Mark Guzdial ha sido consistentemente informativo, provocador y motivador.

Agradecimientos

Este libro no existiría sin las contribuciones de Laura Acion, Jorge Aranda, Mara Averick, Erin Becker, Yanina Bellini Saibene, Azalee Bostroem, Hugo Bowne-Anderson, Neil Brown, Gerard Capes, Francis Castro, Daniel Chen, Dav Clark, Warren Code, Ben Cotton, Richie Cotton, Karen Cranston, Katie Cunningham, Natasha Danas, Matt Davis, Neal Davis, Mark Degani, Tim Dennis, Paul Denny, Michael Deutsch, Brian Dillingham, Grae Drake, Kathi Fisler, Denae Ford, Auriel Fournier, Bob Freeman, Nathan Garrett, Mark Guzdial, Rayna Harris, Ahmed Hasan, Ian Hawke, Felienne Hermans, Kate Hertweck, Toby Hodges, Roel Hogervorst, Mike Hoye, Dan Katz, Christina Koch, Shriram Krishnamurthi, Katrin Leinweber, Colleen Lewis, Dave Loyall, Paweł Marczewski, Lenny Markus, Sue McClatchy, Jessica McKellar, Ian Milligan, Julie Moronuki, Lex Nederbragt, Aleksandra Nenadic, Jeramia Ory, Joel Ostblom, Elizabeth Patitsas, Aleksandra Pawlik, Sorawee Porncharoenwase, Emily Porta, Alex Pounds, Thomas Price, Danielle Quinn, Ian Ragsdale, Erin Robinson, Rosario Robinson, Ariel Rokem, Pat Schloss, Malvika Sharan, Florian Shkurti, Dan Sholler, Juha Sorva, Igor Steinmacher, Tracy Teal, Tiffany Timbers, Richard Tomsett, Preston Tunnell Wilson, Matt Turk, Fiona Tweedie, Martin Ukrop, Anelda van der Walt, Stéfan van der Walt, Allegra Via, Petr Viktorin, Belinda Weaver, Hadley Wickham, Jason Williams, Simon Willison, Karen Word, John Wrenn, y Andromeda Yelton. También estoy agradecido a Lukas Blakk por el logotipo, a Shashi Kumar por la ayuda con LaTeX, a Markku Rontu por hacer que los diagramas se vean mejor, y a toda aquella persona que ha usado este material a lo largo de los años. Cualquier error que permanezca es mío.

Ejercicios

Cada capítulo finaliza con una variedad de ejercicios que incluyen un formato sugerido y cuánto tiempo toma usualmente hacerlos en persona. Muchos pueden ser usados en otros formatos —en particular, si estás recorriendo este libro por tu cuenta, todavía puedes hacer muchos de los ejercicios destinados a grupos— y siempre puedes dedicarles más tiempo que el sugerido.

Si estás usando este material en un taller de formación docente, puedes darles los ejercicios que siguen a quienes participen, con uno o dos días de anticipación, para que tengan una idea de quiénes son y cuál es la mejor manera en que les puedes ayudar. Por favor lee las advertencias en la Section 9.4 antes de hacer estos ejercicios.

Altos y bajos (clase completa/5)

Escribe respuestas breves a las siguientes oraciones y compártelas con tus pares. (Si estás tomando notas colaborativas en línea como se describe en la Section 9.7, puedes escribir tus respuestas allí.)

  1. ¿Cuál es la mejor clase o taller que alguna vez hayas tomado? ¿Qué la hacía tan buena?

  2. ¿Cuál fue la peor? ¿Qué la hacía tan mala?

Conócete a ti mismo/a (clase completa/5)

Comparte respuestas breves a las siguientes preguntas con tus pares. Guarda tus respuestas para que puedas regresar a ellas como referencia a la par que avanzas en el estudio de este libro.

  1. ¿Qué es lo que más quieres enseñar?

  2. ¿A quiénes tienes más ganas de enseñarles?

  3. ¿Por qué quieres enseñar?

  4. ¿Cómo sabrás si estás enseñando bien?

  5. ¿Qué es lo que más quieres aprender acerca de enseñanza y aprendizaje?

  6. ¿Qué cosa específica crees que es cierta acerca de enseñanza y aprendizaje?

¿Por qué aprender a programar? (individual/20)

Los/las políticos/as, líderes de negocios y educadores/as usualmente dicen que la gente debe aprender a programar porque los trabajos del futuro lo requerirán. Sin embargo, como Benjamin Doxtdator ha señalado, muchas de estas afirmaciones están construidas sobre terreno poco firme. E incluso si fueran ciertas, la educación no debería preparar a la gente para los trabajos del futuro: debería darle el poder de decidir qué tipos de trabajos habrá y de asegurarse de que valga la pena hacerlos. Además, como señala Mark Guzdial, hay en realidad muchas razones para aprender a programar:

  1. Para entender nuestro mundo.

  2. Para estudiar y entender procesos.

  3. Para ser capaz de hacer preguntas sobre las influencias en nuestras vidas.

  4. Para usar una importante nueva forma de alfabetización.

  5. Para tener una nueva manera de aprender arte, música, ciencia y matemática.

  6. Como una habilidad laboral.

  7. Para usar mejor las computadoras.

  8. Como un medio en el cual aprender resolución de problemas.

Dibuja una grilla de 3 × 3 cuyos ejes estén etiquetados “baja”, “media” y “alta”, y coloca cada razón en un sector de acuerdo a la importancia que tiene para ti (el eje X) y para la gente a la que planeas enseñar (el eje Y).

  1. ¿Qué puntos están estrechamente alineados en su importancia (es decir, en la diagonal de tu grilla)?

  2. ¿Qué puntos están desalineados (es decir, en las esquinas por fuera de la diagonal)?

  3. ¿Cómo podría afectar esto lo que tú enseñes?

Modelos mentales y evaluación formativa

La primera tarea en la enseñanza es descifrar quiénes son tus estudiantes. Nuestra aproximación está basada en el trabajo de investigadores/as como Patricia Benner, quien estudió cómo las personas progresan de novatas a expertas en la carrera de enfermería  [Benn2000]. Benner identificó cinco etapas de desarrollo cognitivo que la mayor parte de la gente atraviesa de forma bastante consistente. Para nuestros propósitos, simplificamos esta evolución en tres etapas:

Personas novatas

no saben qué es lo que no saben, es decir, aún no tienen un modelo mental utilizable del dominio del problema.

Practicantes competentes

tienen un modelo mental que es adecuado para los propósitos diarios. Pueden llevar a cabo tareas normales con un esfuerzo normal bajo circunstancias normales y tienen algún entendimiento de los límites de su conocimiento (es decir, saben lo que no saben).

Personas expertas

tienen modelos mentales que incluyen excepciones y casos especiales, los cuales les permiten manejar situaciones que están por fuera de lo ordinario. Discutiremos sobre la experiencia o pericia en más detalle en el Chapter 3.

Entonces, ¿qué es un modelo mental? Como el nombre lo sugiere, es una representación simplificada de las partes más importantes de algún dominio del problema; que a pesar de ser simplificada es suficientemente buena para permitir la resolución del problema. Un ejemplo es el modelo molecular de bolas y varillas que se usa en las clases de química de la escuela. Los átomos no son en realidad bolas y las uniones atómicas no son en realidad varillas, pero el modelo permite a la gente razonar sobre los componentes químicos y sus reacciones. Un modelo más sofisticado de un átomo es aquel con una bola pequeña en el centro (el núcleo) rodeada de electrones orbitantes. También es incorrecto, pero la complejidad extra le permite a la gente explicar más y resolver más problemas. (Como con el software, los modelos mentales nunca son finalizados: simplemente son utilizados.)

Presentar a personas novatas un montón de hechos es contraproducente porque aún no tienen un modelo donde ubicarlos. Peor aún, presentar demasiados hechos demasiado pronto puede reforzar el modelo mental incorrecto que han improvisado. Como observó [Mull2007a] en un estudio sobre video-instrucción para estudiantes de ciencia:

Los/las estudiantes tienen ideas previas acerca de los fenómenos antes de ver un video. Si el video presenta los conceptos de una forma clara y bien ilustrada, los/las estudiantes creen que están aprendiendo, pero no se involucran con el video en un nivel suficientemente profundo como para darse cuenta de que lo que se les ha presentado difiere de sus conocimientos previos. Sin embargo, hay esperanza. Se ha demostrado que el aprendizaje aumenta al presentar en un video las concepciones erróneas comunes de los/las estudiantes junto con los conceptos a enseñar, ya que aumenta el esfuerzo mental que los/las estudiantes realizan mientras miran el video.

Tu objetivo cuando enseñes a personas novatas debe por lo tanto ser ayudarles a construir un modelo mental para que tengan algún lugar en el que ordenar los hechos. Por ejemplo, la lección sobre la consola Unix de Software Carpentry introduce quince comandos en tres horas. Eso es un comando cada doce minutos, lo que parece muy lento hasta que te das cuenta de que el propósito real de la lección no es enseñar esos quince comandos: es enseñar las rutas de acceso, el historial de comandos, el autocompletado con el tabulador, los comodines, los pipes, los argumentos de la línea de comandos y las redirecciones. Los comandos específicos no tienen sentido hasta que las personas novatas entienden estos conceptos; una vez que lo hacen, pueden empezar a leer manuales, pueden buscar las palabras clave correctas en la web y pueden decidir si los resultados de sus búsquedas son útiles o no.

Las diferencias cognitivas entre personas novatas y practicantes competentes apuntalan las diferencias entre dos tipos de materiales educativos. Un tutorial ayuda a construir un modelo mental a quienes recién llegan a un determinado campo; un manual, por otro lado, ayuda a practicantes competentes a llenar los baches de su conocimiento. Los tutoriales frustran a practicantes competentes porque avanzan demasiado lento y dicen cosas que son obvias (aunque no son para nada obvias para personas novatas). De la misma manera, los manuales frustran a las personas novatas porque usan jerga y no explican las cosas. Este fenómeno se llama el efecto de inversión de la experiencia [Kaly2003], y es una de las razones por las que tienes que decidir tempranamente para quiénes son tus lecciones.

Un puñado de excepciones

Una de las razones por las que Unix y C se hicieron populares es que [Kern1978,Kern1983,Kern1988] de alguna manera consiguieron tener buenos tutoriales y buenos manuales al mismo tiempo. [Fehi2008] y [Ray2014] están entre los otros pocos libros de computación que consiguieron esto; incluso luego de releerlos varias veces, no sé cómo lo lograron.

¿Están aprendiendo tus estudiantes?

Mark Twain escribió: “No es lo que no sabes lo que te mete en problemas. Es lo que sabes con seguridad y simplemente no es así”. Una de las tareas al construir un modelo mental es, por lo tanto, despejar las cosas que no pertenecen a él. En sentido amplio, las concepciones erróneas de las personas novatas caen en tres categorías:

Errores fácticos

como creer que Río de Janeiro es la capital de Brasil (es Brasilia). Estos errores generalmente son fáciles de corregir.

Modelos rotos

como creer que el movimiento y la aceleración deben estar en la misma dirección. Podemos lidiar con estos errores haciendo que las personas novatas razonen a través de ejemplos en los que sus modelos den una respuesta incorrecta.

Creencias fundamentales

como por ejemplo “el mundo solo tiene algunos miles de años de antigüedad” o “algunos tipos de personas son naturalmente mejores en programación que otros” [Guzd2015b,Pati2016]. Estos errores están generalmente conectados profundamente con la identidad social del/de la estudiante, por lo que resisten a las evidencias y racionalizan las contradicciones.

La gente aprende más rápido cuando los/las docentes identifican y aclaran los conceptos erróneos de sus estudiantes mientras se está dando la lección. Esto se llama evaluación formativa porque forma (o le da forma a) la enseñanza mientras se está llevando a cabo. Los/as estudiantes no aprueban o reprueban una evaluación formativa. En cambio, la evaluación formativa da, tanto a quien enseña como a quien aprende, retroalimentación sobre qué tan bien les está yendo y en qué se deberían enfocar en los próximos pasos. Por ejemplo, un/una docente de música le puede pedir a un/una estudiante que toque una escala muy lentamente para chequear su respiración. Entonces, el/la estudiante averigua si la respiración es correcta, mientras que el/la docente recibe una devolución sobre si la explicación que acaba de dar fue comprendida.

En resumen

El contrapunto de la evaluación formativa es la evaluación sumativa, que tiene lugar al final de la lección. La evaluación sumativa es como un examen de conducir: le dice a quien está aprendiendo a conducir si ha dominado el tema y a quien le está enseñando si su lección ha sido exitosa. Una forma de pensar la diferencia entre los dos tipos de evaluación es que quien prueba la comida mientras cocina está haciendo una evaluación formativa, mientras que quien es comensal y prueba la comida cuando se le sirve está haciendo una evaluación sumativa. Desafortunadamente, la escuela ha entrenado a la mayoría de la gente para creer que toda evaluación es sumativa, es decir, que si algo se parece a un examen, resolverlo mal les jugará en contra. Hacer que las evaluaciones formativas se sientan informales reduce esta ansiedad; en mi experiencia, usar clickers, cuestionarios en línea o cualquier cosa semejante parece aumentar la ansiedad, ya que hoy en día la mayoría de la gente cree que todo lo que hace en internet está siendo mirado y grabado.

Para ser útil durante la enseñanza, una evaluación formativa debe ser rápida de administrar (de manera que no rompa el flujo de la lección) y debe tener una respuesta correcta no ambigua (de manera que pueda ser usada en grupos). El tipo de evaluación formativa más ampliamente usado es probablemente el cuestionario de opciones múltiples (COM). Muchos y muchas docentes tienen una mala opinión de ellos, pero cuando están bien diseñados pueden revelar mucho más que si alguien sabe o no algunos hechos específicos. Por ejemplo, supón que estás enseñando a niños y niñas cómo hacer sumas de números con múltiples dígitos [Ojos2015] y les das este COM:

¿Cuánto es 37 + 15?
a) 52
b) 42
c) 412
d) 43

La respuesta correcta es 52, pero las otras respuestas proporcionan información valiosa:

  • Si el/la niño/a elige 42, no entiende qué significa “llevarse” una unidad. (Podría escribir 12 como respuesta a 7+5, pero luego reemplazaría el 1 con el 4 que obtiene de la suma de 3+1.)

  • Si elige 412, está tratando a cada columna de números como un problema separado. Esto sigue siendo incorrecto, pero es incorrecto por un motivo distinto.

  • Si elige 43, entonces sabe que tiene que llevarse el 1, pero lo lleva de vuelta a la columna de donde viene. De nuevo, es un error distinto que requiere de una explicación clarificadora diferente por parte de quien enseña.

Cada una de estas respuestas incorrectas es un distractor plausible con poder diagnóstico. Un distractor es una respuesta incorrecta o peor que la mejor respuesta; “plausible” significa que parece que podría ser correcta, mientras que “poder diagnóstico” significa que cada uno de los distractores ayuda al/a la docente a darse cuenta de qué explicar a continuación a estudiantes particulares.
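La correspondencia entre distractores y concepciones erróneas puede esbozarse en unas pocas líneas de Python (un ejemplo mínimo e hipotético: los nombres y los mensajes de diagnóstico son supuestos ilustrativos, no provienen del libro):

```python
# Esbozo hipotético: asocia cada respuesta del COM "¿Cuánto es 37 + 15?"
# con la concepción errónea que ayuda a diagnosticar.
DIAGNOSTICO = {
    "52": "Correcto: suma con acarreo bien aplicada.",
    "42": "No entiende qué significa “llevarse” una unidad.",
    "412": "Trata cada columna como un problema separado.",
    "43": "Se lleva el 1, pero a la columna equivocada.",
}

def retroalimentar(respuesta):
    """Devuelve la explicación asociada a la respuesta elegida."""
    return DIAGNOSTICO.get(respuesta, "Respuesta fuera de las opciones.")

print(retroalimentar("412"))  # → Trata cada columna como un problema separado.
```

Un diccionario como este hace explícito que cada distractor existe para detectar un error específico, no para rellenar opciones.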

La variedad de respuestas a una evaluación formativa te guía sobre cómo continuar. Si una cantidad suficiente de la clase tiene la respuesta correcta, avanzas. Si la mayoría de la clase elige la misma respuesta incorrecta, deberías retroceder y trabajar en corregir la concepción errónea que ese distractor señala. Si las respuestas de la clase se dividen equitativamente entre varias opciones, probablemente están adivinando, entonces deberías retroceder y re-explicar la idea de una manera distinta. (Repetir exactamente la misma explicación probablemente no será útil, lo cual es uno de los motivos por los que tantos cursos por video son pedagógicamente ineficientes.)

¿Qué pasa si la mayoría de la clase vota por la respuesta correcta pero un grupo pequeño vota por las incorrectas? En ese caso, tienes que decidir si deberías destinar tiempo a que la minoría entienda o si es más importante mantener a la mayoría cautivada. No importa cuán duro trabajes o qué prácticas de enseñanza uses, no siempre vas a conseguir darle a todos y todas lo que necesitan; es tu responsabilidad como docente tomar la decisión.
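La regla de decisión anterior puede esbozarse como una pequeña función (un esquema hipotético: los umbrales de 80% y 50% son supuestos ilustrativos, no cifras del libro):

```python
# Esquema hipotético: decide el próximo paso según la distribución
# de respuestas de la clase a una evaluación formativa.
def proximo_paso(conteos, respuesta_correcta, umbral=0.8):
    """conteos: dict opción -> cantidad de estudiantes que la eligieron."""
    total = sum(conteos.values())
    correctas = conteos.get(respuesta_correcta, 0)
    if correctas / total >= umbral:
        return "avanzar"
    incorrectas = {k: v for k, v in conteos.items() if k != respuesta_correcta}
    if incorrectas and max(incorrectas.values()) / total >= 0.5:
        # La mayoría comparte el mismo error: atacar esa concepción errónea.
        return "corregir la concepción errónea del distractor más elegido"
    # Respuestas repartidas: probablemente están adivinando.
    return "re-explicar la idea de una manera distinta"

print(proximo_paso({"52": 18, "42": 1, "412": 1}, "52"))  # → avanzar
```

El caso restante del texto (mayoría correcta pero una minoría atrasada) queda fuera del esquema a propósito: esa es una decisión de criterio docente, no de umbrales.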

¿De dónde vienen las respuestas incorrectas?

Para diseñar distractores plausibles, piensa en las preguntas que tus estudiantes hicieron o en los problemas que tuvieron la última vez que enseñaste esta temática. Si no la has enseñado antes, piensa en tus propios conceptos erróneos, pregúntale a colegas sobre sus experiencias o busca la historia de tu campo temático: si las demás personas tuvieron los mismos malentendidos sobre tu temática cincuenta años atrás, hay chances de que la mayoría de tus estudiantes aún malentiendan la temática de la misma forma al día de hoy. También puedes hacer preguntas abiertas en clase para recoger las concepciones erróneas sobre los temas que vas a abarcar en una clase posterior, o consultar sitios de preguntas y respuestas como Quora o Stack Overflow para ver con qué se confunden quienes aprenden la temática en cualquier otro lugar.

Desarrollar evaluaciones formativas hace mejor tus lecciones porque te fuerza a pensar en los modelos mentales de tus estudiantes. En mi experiencia, al pensar evaluaciones formativas automáticamente escribo la lección de forma de abarcar los baches y errores más probables. Las evaluaciones formativas, por lo tanto, dan buenos resultados incluso si no son utilizadas (aunque la enseñanza es más efectiva cuando sí se utilizan).

Los COMs no son el único tipo de evaluación formativa: el Chapter 12 describe otros tipos de ejercicios que son rápidos y no ambiguos. Cualquiera sea la evaluación que escojas, deberías hacer algo que tome un minuto o dos cada 10–15 minutos, de manera de asegurarte de que tus estudiantes están realmente aprendiendo. Este ritmo no está basado en un límite de atención intrínseco: [Wils2007] encontró poca evidencia a favor de la afirmación usualmente repetida de que los/las estudiantes solo pueden prestar atención durante 10–15 minutos. En cambio, la guía asegura que, si un número significativo de personas se ha quedado atrás, solo tienes que repetir una pequeña porción de la lección. Las evaluaciones formativas frecuentes también mantienen el interés de tus estudiantes, particularmente si incluyen una discusión en grupos pequeños (Section 9.2).

Las evaluaciones formativas también pueden ser usadas antes de las lecciones. Si comienzas una clase con un COM y toda la clase lo contesta correctamente, puedes evitar explicar algo que tus estudiantes ya saben. Este tipo de enseñanza activa te da más tiempo para enfocarte en las cosas que tus estudiantes no saben. También le muestra a tus estudiantes que respetas su tiempo lo suficiente como para no desperdiciarlo, lo que ayuda a la motivación (Chapter 10).

Inventario de conceptos

Con una cantidad de datos suficiente, los COMs pueden ser sorprendentemente precisos. El ejemplo más conocido es el inventario del concepto de fuerza [Hest1992], que evalúa la comprensión de la mecánica newtoniana básica. Al entrevistar a un gran número de participantes, correlacionar sus concepciones erróneas con los patrones de respuestas correctas e incorrectas y mejorar las preguntas, los/las creadores/as de este inventario construyeron una herramienta de diagnóstico que permite identificar concepciones erróneas específicas. Las personas que investigan pueden utilizar dicha herramienta para medir el efecto de los cambios en los métodos de enseñanza [Hake1998]. Tew y colaboradores desarrollaron y validaron una evaluación independiente del lenguaje para programación introductoria [Tew2011]; [Park2016] la replicaron y [Hamo2017] está desarrollando un inventario de conceptos sobre la recursividad. Sin embargo, es muy costoso construir herramientas de este tipo y su validez está cada vez más amenazada por la habilidad de los/las estudiantes para buscar respuestas en línea.

Desarrollar evaluaciones formativas en una clase solo requiere un poco de preparación y práctica. Puedes darles a tus estudiantes tarjetas coloreadas o numeradas para que respondan a un COM simultáneamente (en lugar de que tengan que levantar la mano por turnos), incluir como una de las opciones “No tengo idea” y alentarles a hablar con sus pares más cercanos por unos segundos antes de responder. Todas estas prácticas te ayudarán a asegurar que el flujo de enseñanza no se interrumpa. La Section 9.2 describe un método de enseñanza poderoso y basado en evidencia, construido a partir de estas simples ideas.

Humor

Los/las docentes a veces incluyen respuestas supuestamente tontas en los COMs, como “¡mi nariz!”, particularmente en los cuestionarios destinados a estudiantes jóvenes. Sin embargo, estas respuestas no proveen ninguna pista sobre las concepciones erróneas de los/las estudiantes, y la mayoría de la gente no las encuentra graciosas. Como regla, solo deberías incluir un chiste en una lección si te sigue pareciendo gracioso la tercera vez que lo relees.

Las evaluaciones formativas de una lección deberían preparar a los/las estudiantes para su evaluación sumativa: nadie debería encontrar nunca una pregunta en un examen para la cual la enseñanza no lo/la ha preparado. Esto no significa que nunca debes incluir nuevos tipos de problemas en un examen, pero, si lo haces, deberías de antemano haberles dado a tus estudiantes prácticas para abordar problemas nuevos. El Chapter 6 explora este punto en profundidad.

Máquina nocional

El término pensamiento computacional está muy extendido, en parte porque la gente coincide en que es importante aun cuando con el mismo término se suele hacer referencia a cosas muy distintas. En vez de discutir qué incluye y qué no incluye el término, es más útil pensar en la máquina nocional que quieres que tus estudiantes entiendan [DuBo1986]. De acuerdo a [Sorv2013], una máquina nocional:

  • es una abstracción idealizada del hardware de computación y de otros aspectos de los entornos de ejecución de los programas;

  • permite describir la semántica de los programas; y

  • refleja correctamente qué hacen los programas cuando son ejecutados.

Por ejemplo, mi máquina nocional para Python es:

  1. Los programas en ejecución residen en la memoria, la cual se divide en la pila de llamadas y el heap.

  2. La memoria para los datos siempre es asignada desde el heap.

  3. Cada conjunto de datos se almacena en una estructura de dos partes. La primera parte dice de qué tipo de datos se trata y la segunda parte es el valor real.

  4. Booleanos, números y caracteres de texto nunca son modificados una vez que se crean.

  5. Las listas, conjuntos y otras colecciones almacenan referencias a otros datos en lugar de almacenar estos valores directamente. Pueden ser modificadas una vez que se crean, es decir, una lista puede ser ampliada o nuevos valores pueden ser agregados a un conjunto.

  6. Cuando un código se carga a la memoria, Python lo convierte a una secuencia de instrucciones que son almacenadas como cualquier otro tipo de dato. Este es el motivo por el que es posible asignar funciones a variables y luego pasarlas como parámetros.

  7. Cuando un código es ejecutado, Python sigue las instrucciones paso a paso, haciendo lo que cada instrucción le indica de a una por vez.

  8. Algunas instrucciones hacen que Python lea datos, haga cálculos y cree nuevos datos. Otras controlan qué instrucciones ejecuta Python, que es el modo en que funcionan los bucles y condicionales. Otras le indican a Python que llame a una función.

  9. Cuando se llama a una función, Python coloca un nuevo marco de pila en la pila de llamadas.

  10. Cada marco de pila almacena los nombres de las variables y las referencias a los datos. Los parámetros de las funciones son simplemente otro tipo de variable.

  11. Cuando una variable es utilizada, Python la busca en el marco superior de la pila. Si no la encuentra allí, busca en el marco inferior (global).

  12. Cuando la función finaliza, Python descarta su marco de pila y vuelve a las instrucciones que estaba ejecutando antes de llamar a la función. Si no hay un “antes,” el programa ha finalizado.
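Varias de estas características se pueden ilustrar con un fragmento mínimo de Python (los nombres de variables y funciones son inventados, solo a modo de ejemplo):

```python
# Las funciones son datos (punto 6): pueden asignarse a variables
# y pasarse como parámetros.
def saludar(nombre):
    return "hola " + nombre

otra = saludar              # dos nombres para la misma función
assert otra("ana") == "hola ana"

# Las cadenas son inmutables (punto 4); las listas almacenan
# referencias y pueden modificarse después de creadas (punto 5).
valores = [1, 2, 3]
alias = valores             # dos nombres, una única lista en el heap
alias.append(4)
assert valores == [1, 2, 3, 4]
```

Probar fragmentos como este en un intérprete es una buena forma de verificar si el modelo mental del grupo coincide con la máquina nocional.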

Uso esta versión caricaturizada de la realidad siempre que enseño Python. Después de 25 horas de instrucción y 100 horas de trabajo por su cuenta, espero que la mayor parte del grupo tenga un modelo mental que incluya todas o la mayoría de estas características.

Ejercicios

Tus modelos mentales (pensar-parejas-compartir/15)

¿Cuál es el modelo mental que usas para entender tu trabajo? Escribe unas pocas oraciones para describirlo y hazle una devolución a tu pareja sobre su modelo mental. Una vez que has hecho esto en pareja, algunas pocas personas de la clase compartirán sus modelos con el grupo completo. ¿Está todo el grupo de acuerdo sobre qué es un modelo mental? ¿Es posible dar una definición precisa?, ¿o el concepto es útil justamente porque es difuso?

Síntomas de ser una persona novata (clase completa/5)

Decir que las personas novatas no tienen un modelo mental de un dominio particular no es equivalente a decir que no tienen ningún modelo mental. Las personas novatas tienden a razonar por analogía y arriesgan conjeturas: toman prestados fragmentos y partes de modelos mentales de otros dominios que superficialmente parecen similares.

La gente que hace esto generalmente dice cosas que ni siquiera son falsas. Como clase, discutan qué otros síntomas hay de ser una persona novata. ¿Qué dice o hace una persona para llevarte a clasificarla como novata en algún dominio?

Modelar modelos mentales de las personas novatas (parejas/20)

Crea un cuestionario de múltiples opciones relacionado con un tema que hayas enseñado o pretendas enseñar y explica el poder diagnóstico de cada uno de sus distractores (es decir, qué concepción errónea pretende identificar cada distractor).

Una vez que hayas finalizado, intercambia COMs con tu pareja. ¿Son sus preguntas ambiguas? ¿Son las concepciones erróneas plausibles? ¿Los distractores realmente evalúan esas concepciones erróneas? ¿Hay otras posibles concepciones erróneas que no sean evaluadas?

Pensar en las cosas (clase completa/15)

Una buena evaluación formativa requiere que la gente piense profundamente en un problema. Por ejemplo, imagina que has colocado un bloque de hielo en un recipiente y luego llenas de agua este recipiente hasta el borde. Cuando el hielo se derrite, ¿el nivel de agua aumenta (de manera que el recipiente rebasa)?, ¿el nivel de agua baja?, ¿o se mantiene igual (Figure [f:models-bathtub])?

Hielo en un recipiente

La solución correcta es que el nivel del agua permanece igual: el hielo desplaza su propio peso en agua, por lo que completa exactamente el “agujero” que ha dejado al derretirse. Para darse cuenta del porqué, la gente tiene que construir un modelo de la relación entre el peso, el volumen y la densidad [Epst2002].

Describe otra evaluación formativa que hayas visto o hayas utilizado, alguna que consideres que lleve a los/las estudiantes a pensar profundamente en algo, y por lo tanto ayude a identificar los defectos en sus razonamientos.

Cuando hayas finalizado, explícale tu ejemplo a otra persona de tu grupo y dale una devolución sobre su ejemplo.

Una progresión diferente (individuos/15)

El modelo de desarrollo de habilidades de persona novata-competente-experta es a veces llamado modelo Dreyfus. Otra progresión comúnmente utilizada es el modelo de las cuatro etapas de la competencia:

Incompetencia inconsciente:

la persona no sabe lo que no sabe.

Incompetencia consciente:

la persona se da cuenta de que no sabe algo.

Competencia consciente:

la persona ha aprendido cómo hacer algo, pero solo lo puede hacer mientras mantiene su concentración y quizás aún deba dividir la tarea en varios pasos.

Competencia inconsciente:

la habilidad se ha transformado en una segunda naturaleza y la persona puede realizarla reflexivamente.

Identifica una temática en la que te encuentres en cada uno de los niveles de desarrollo de habilidades. En la materia que enseñas, ¿en qué nivel están usualmente la mayoría de tus estudiantes? ¿Qué nivel estás tratando que alcancen? ¿Cómo se relacionan estas cuatro etapas con la clasificación persona novata-competente-experta?

¿Qué tipo de computación? (individuos/10)

[Tedr2008] resume tres tradiciones en computación:

Matemática:

Los programas son la encarnación de los algoritmos. Son correctos o incorrectos, así como más o menos eficientes.

Científica:

Los programas son modelos de procesos de información más o menos adecuados que pueden ser estudiados usando el método científico.

Ingenieril:

Los programas son objetos que se construyen, tales como los diques y los aviones, y son más o menos efectivos y confiables.

¿Cuál de estas tradiciones coincide mejor con tu modelo mental de la computación? Si ninguna de ellas coincide, ¿qué modelo tienes?

Explicar por qué no (parejas/5)

Un/a estudiante de tu curso piensa que hay algún tipo de diferencia entre el texto que tipea carácter por carácter y el texto idéntico que copia y pega. Piensa en una razón por la que tu estudiante puede creer esto o en algo que pueda haber sucedido para darle esa impresión. Luego, simula ser esa persona mientras tu pareja te explica por qué no es así. Intercambia roles con tu pareja y vuelve a probar.

Tu modelo ahora (clase completa/5)

Como clase, creen una lista de elementos clave de su modelo mental de enseñanza. ¿Cuáles son la media docena de conceptos más importantes y cómo se relacionan?

Tus máquinas nocionales (grupos pequeños/20)

En grupos pequeños, escriban una descripción de la máquina nocional que quieren que sus estudiantes usen para entender cómo corren sus programas. ¿En qué difiere una máquina nocional para un lenguaje basado en bloques como Scratch de la máquina nocional para Python? ¿Y en qué difiere de una máquina nocional para hojas de cálculo o para un buscador que está interpretando HTML y CSS cuando renderiza una página web?

Disfrutar sin aprender (individuos/5)

Muchos estudios han mostrado que las evaluaciones de la enseñanza no se correlacionan con los resultados del aprendizaje [Star2014,Uttl2017], es decir, cuán alto puntúe el grupo de estudiantes a un curso no predice cuánto recuerda. ¿Alguna vez has disfrutado de una clase en la que en realidad no has aprendido nada? Si la respuesta es sí, ¿qué hizo que disfrutaras esa clase?

Revisión

Conceptos: modelos mentales
Conceptos: evaluación

Pericia y memoria

La memoria es el remanente del pensamiento.
— Daniel Willingham, Por qué a los estudiantes no les gusta la escuela(Why Students Don’t Like School)

El capítulo anterior explicaba las diferencias entre personas novatas y practicantes competentes. En éste se observa la pericia: qué es, cómo se puede adquirir, y cómo puede ser perjudicial o también de ayuda. Luego introduciremos uno de los límites más importantes en el aprendizaje y miraremos cómo crear dibujos de modelos mentales puede ayudarnos a convertir el conocimiento en lecciones.

Para empezar, ¿a qué nos referimos cuando decimos que alguien es una persona experta? La respuesta habitual es que puede resolver problemas mucho más rápido que la persona que es “simplemente competente”, o que puede reconocer y entender casos donde las reglas normales no se pueden aplicar. Es más, de alguna manera hace que parezca que no requiere esfuerzo alguno: en muchos casos, parece saber la respuesta correcta de un vistazo [Parn2017].

La pericia es más que solo conocer más hechos: los practicantes competentes pueden memorizar una gran cantidad de trivialidades sin mejorar notablemente su desempeño. En cambio, imagina por un momento que almacenamos el conocimiento como una red o grafo en el cual los hechos son nodos y las relaciones son arcos. La diferencia clave entre personas expertas y practicantes competentes es que los modelos mentales de las personas expertas están mucho más densamente conectados, es decir, es más probable que conozcan una conexión entre dos hechos cualesquiera.

La metáfora del grafo explica por qué ayudar a los estudiantes a hacer conexiones es tan importante como presentarles los hechos: sin esas conexiones, la gente no puede recordar y usar aquello que sabe. También explica varios aspectos observados del comportamiento experto:

  • Las personas expertas pueden saltar directamente de un problema a una solución porque realmente existe una conexión directa entre ambos en sus mentes. Mientras un practicante competente debería razonar A → B → C → D → E, una persona experta puede ir de A a E en un solo paso. Esto lo llamamos intuición: en vez de razonar su camino a una solución, la persona experta reconoce una solución de la misma manera que reconocería una cara familiar.

  • Los grafos densamente conectados son también la base para la representación fluida de las personas expertas, es decir, su habilidad para cambiar entre distintas vistas de un problema [Petr2016]. Por ejemplo, al tratar de resolver un problema en matemáticas, una persona experta puede cambiar entre abordarlo de manera geométrica y representarlo como un conjunto de ecuaciones.

  • Esta metáfora también explica por qué las personas expertas son mejores en diagnósticos que los practicantes competentes: mayor cantidad de conexiones entre hechos hace más fácil razonar hacia atrás, de síntomas a causas. (Esta es la razón de por qué es preferible pedirle a programadores depurar un programa durante una entrevista de trabajo a pedirles que programen: da una impresión más precisa de su habilidad).

  • Finalmente, las personas expertas están muchas veces tan familiarizadas con su tema que no pueden imaginarse cómo puede ser no ver el mundo de esa manera. Esto significa que muchas veces están menos capacitadas para enseñar un tema que personas con menor experiencia, que aún recuerdan cómo lo han aprendido.

El último de estos puntos se llama punto ciego de las personas expertas. Como se definió originalmente en [Nath2003], es la tendencia de las personas expertas a organizar una explicación de acuerdo con los principios fundamentales del tema en lugar de guiarse por aquello que los aprendices ya conocen. Se puede superar con entrenamiento, pero es parte de la razón por la que no hay correlación entre lo bueno que es alguien para investigar en un área y lo bueno que es para enseñarla [Mars2002].

La letra S

Las personas expertas a menudo caen en sus puntos ciegos usando la palabra “solo,” como en: “Oh, es fácil, solo enciendes una nueva máquina virtual y luego solo instalas estos cuatro parches a Ubuntu y luego solo reescribes todo tu programa en un lenguaje funcional puro.” Como discutimos en Chapter 10, hacer esto indica que quien habla piensa que el problema es trivial y que, por lo tanto, la persona que lucha con él debe ser estúpida; entonces, no lo hagas.

Mapas conceptuales

La herramienta que elegimos para representar el modelo mental de alguien es un mapa conceptual, en el cual los hechos son burbujas y las conexiones son relaciones etiquetadas. Como ejemplos, Figure [f:memory-seasons] muestra por qué la Tierra tiene estaciones (de IHMC), y Appendix 22 presenta mapas conceptuales de librerías desde tres puntos de vista distintos.

Mapa conceptual para Estaciones

Para mostrar cómo pueden ser usados los mapas conceptuales para enseñar programación, considera este bucle for en Python:

for letter in "abc":
    print(letter)

cuya salida es:

a
b
c

Las tres “cosas” clave en este bucle se muestran al principio de Figure [f:memory-loop], pero son solo la mitad de la historia. La versión ampliada en la parte inferior muestra las relaciones entre esas cosas, las cuales son tan importantes para la comprensión como los conceptos en sí mismos.

Mapa conceptual para un bucle for

Los mapas conceptuales pueden ser usados de varias maneras:

Ayudando a docentes a descubrir qué están tratando de enseñar.

Un mapa conceptual separa el contenido del orden: en nuestra experiencia, las personas rara vez terminan enseñando las cosas en el orden que las dibujaron por primera vez.

Ayudando a la comunicación entre diseñadores de lecciones.

Los docentes con ideas muy diferentes de aquello que están tratando de enseñar es probable que arrastren a sus estudiantes en diferentes direcciones. Dibujar y compartir mapas conceptuales puede ayudar a prevenirlo. Y sí, personas diferentes pueden tener mapas conceptuales diferentes para el mismo tema, pero el mapeo conceptual hace explícitas estas diferencias.

Ayudando a la comunicación con estudiantes.

Si bien es posible dar a los estudiantes un mapa pre-dibujado al inicio de la lección para que puedan anotar, es mejor dibujarlo parte por parte mientras se está enseñando, para reforzar la relación entre lo que muestra el mapa y lo que dice el docente. Volveremos a esta idea en Section 4.1.

Para evaluación.

Hacer que los estudiantes dibujen lo que creen que acaban de aprender muestra al enseñante qué se perdieron y qué se comunicó mal. Revisar los mapas conceptuales de estudiantes insume demasiado tiempo para utilizarlo como una evaluación formativa durante las clases, pero es muy útil en clases semanales una vez que el estudiantado está familiarizado con la técnica. Esta salvedad es necesaria porque cualquier manera nueva de hacer algo inicialmente ralentiza a la gente: si un/a estudiante está tratando de encontrarle el sentido a la programación básica, pedirle que al mismo tiempo descubra cómo esquematizar sus pensamientos es una carga injusta.

Algunos enseñantes son escépticos de que las personas novatas puedan mapear efectivamente lo que entendieron, dado que la introspección y la explicación de lo entendido son generalmente habilidades más avanzadas que la comprensión misma. Por ejemplo, [Kepp2008] observó el uso del mapeo conceptual en la enseñanza de computación. Uno de los hallazgos fue que “el mapeo conceptual es problemático para muchos estudiantes porque evalúa la comprensión personal en lugar del conocimiento que simplemente se aprendió de memoria.” Yo, como alguien que valora la comprensión sobre el conocimiento de memoria, lo considero un beneficio.

Comienza por cualquier lugar

Cuando se pide por primera vez dibujar un mapa conceptual, muchas personas no saben por dónde empezar. Cuando esto ocurre, escribe dos palabras asociadas con el tema que estás tratando de mapear, luego dibuja una línea entre ellas y agrega una etiqueta explicando cómo estas dos ideas están relacionadas. Puedes entonces preguntar qué otras cosas están relacionadas en el mismo sentido, qué partes tienen esas cosas, o qué sucede antes o después con los conceptos que ya están en la hoja, a fin de descubrir más nodos y arcos. Después de eso, casi siempre la parte más difícil está terminada.
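Para una audiencia que programa, puede ayudar pensar un mapa conceptual como una lista de tripletas (concepto, relación, concepto). Este esbozo en Python (con nodos y una función inventados solo para el ejemplo) muestra cómo, a partir de un par inicial, se pueden descubrir más nodos y arcos:

```python
# Un mapa conceptual representado como tripletas (origen, relación, destino).
mapa = [
    ("bucle for", "repite", "cuerpo del bucle"),
    ("bucle for", "recorre", "colección"),
    ("variable del bucle", "toma valores de", "colección"),
]

def vecinos(mapa, concepto):
    """Devuelve las relaciones que parten de un concepto dado."""
    return [(relacion, destino) for origen, relacion, destino in mapa
            if origen == concepto]

assert vecinos(mapa, "bucle for") == [
    ("repite", "cuerpo del bucle"),
    ("recorre", "colección"),
]
```

Preguntarse qué conceptos todavía no tienen vecinos es una manera mecánica de encontrar las partes del mapa que faltan completar.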

Los mapas conceptuales son solo una forma de representar nuestro conocimiento de un tema [Eppl2006]; otras incluyen diagramas de Venn, diagramas de flujo y árboles de decisión [Abel2009]. Todos ellos externalizan la comprensión, es decir, hacen visibles los modelos mentales de manera que puedan ser comparados y combinados.

Trabajo crudo y honestidad

Muchos diseñadores de interfaces de usuario creen que es mejor mostrar a la gente bocetos de sus ideas en lugar de maquetas pulidas, porque estiman que las personas dan una opinión más honesta sobre algo que consideran que solo ha requerido unos pocos minutos crear: si parece que lo que están criticando tardó horas en hacerse, la mayoría moderará sus críticas. Al dibujar mapas conceptuales para motivar un intercambio de ideas, deberías entonces usar lápices y papel borrador (o bolígrafos y una pizarra) en lugar de sofisticadas herramientas de dibujo por computadora.

Siete más o menos dos

Mientras que el modelo de grafo del conocimiento es incorrecto pero útil, otro modelo simple tiene bases fisiológicas profundas. Como una aproximación rápida, la memoria humana se puede dividir en dos capas distintas. La primera, llamada memoria a largo plazo o persistente, es donde almacenamos cosas como los nombres de nuestros amigos, nuestra dirección, y lo que hizo el payaso en nuestro octavo cumpleaños que tanto nos asustó. Su capacidad es esencialmente ilimitada, pero es de acceso lento: demasiado lento para ayudarnos a lidiar con leones hambrientos y familiares descontentos.

La evolución entonces nos ha dado un segundo sistema llamado memoria a corto plazo o de trabajo. Es mucho más rápido, pero también más pequeño: [Mill1956] estimó que la memoria de trabajo del adulto promedio solo podía contener 7 ± 2 elementos a la vez. Esta es la razón por la cual los números de teléfono tienen 7 u 8 dígitos de longitud: en la época en que los teléfonos tenían disco en vez de teclado, esa era la cadena de números más larga que la mayoría de los adultos podía recordar con precisión durante el tiempo que tardaba el disco en girar varias veces.

Participación

El tamaño de la memoria de trabajo a veces se usa para explicar por qué los equipos deportivos tienden a formarse con aproximadamente media docena de miembros o se separan en sub-grupos como los delanteros y tres cuartos de rugby. También se usa para explicar por qué las reuniones sólo son productivas hasta un cierto número de participantes: si veinte personas tratan de discutir algo, o bien se arman tres reuniones al mismo tiempo o media docena de personas hablan mientras los demás escuchan. El argumento es que la habilidad de las personas para llevar registro de sus pares está limitada al tamaño de la memoria de trabajo, pero hasta donde sé, la relación jamás fue probada.

7±2 es probablemente el número más importante en la enseñanza. Un docente no puede colocar información directamente en la memoria a largo plazo de un estudiante. En cambio, cualquier cosa que presente se almacena primero en la memoria a corto plazo del estudiante, y solo se transfiere a la memoria a largo plazo después de que ha sido mantenida ahí y ensayada (Section 5.1). Si el docente presenta demasiada información y muy rápidamente, la nueva información desplaza a la vieja antes de que esta última se transfiera.

Esta es una de las razones para usar mapas conceptuales cuando se diseña una lección: sirven para asegurarse de que la memoria a corto plazo de los estudiantes no estará sobrecargada. Una vez que se dibuja el mapa, el docente elegirá un fragmento que se ajuste a la memoria a corto plazo y continuará con una evaluación formativa (Figure [f:memory-photosynthesis]), luego agregará otro fragmento para la próxima lección y así sucesivamente.

Usando mapas conceptuales en el diseño de la lección

Construyendo mapas conceptuales en comunidad

La próxima vez que tengas una reunión de equipo, entrega a todos una hoja de papel y que pasen unos minutos dibujando sus propios mapas conceptuales del proyecto en el que están trabajando. A la cuenta de tres, haz que todos revelen sus mapas conceptuales a su grupo. La discusión que sigue puede ayudar a las personas a comprender por qué se han estado tropezando.

Ten en cuenta que el modelo simple de memoria presentado aquí ha sido reemplazado en gran medida por uno más sofisticado, en el que la memoria a corto plazo se divide en varios almacenamientos (p. ej. para memoria visual versus lingüística), cada uno de los cuales realiza un preprocesamiento involuntario [Mill2016a]. Nuestra presentación es entonces un ejemplo de un modelo mental que ayuda al aprendizaje y al trabajo diario.

Reconocimiento de patrones

Investigaciones recientes sugieren que el tamaño real de la memoria a corto plazo podría ser tan bajo como 4±1 elementos [Dida2016]. Para manejar conjuntos de información más grandes, nuestras mentes crean fragmentos. Por ejemplo, la mayoría de nosotros recordamos las palabras como elementos simples más que como secuencias de letras. Del mismo modo, el patrón formado por cinco puntos en cartas o dados se recuerda como un todo en lugar de cinco piezas de información separadas.
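La idea puede verse con un pequeño esbozo en Python (la función es hipotética, solo para ilustrar): agrupar una secuencia de dígitos en fragmentos reduce la cantidad de elementos que hay que retener a la vez.

```python
def fragmentar(digitos, tamano=3):
    """Agrupa una cadena de dígitos en fragmentos de tamaño fijo."""
    return [digitos[i:i + tamano] for i in range(0, len(digitos), tamano)]

# Nueve elementos sueltos se convierten en solo tres fragmentos,
# que caben con holgura en la memoria de trabajo.
assert fragmentar("123456789") == ["123", "456", "789"]
```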

Las personas expertas tienen más fragmentos y de mayor tamaño que las no expertas, es decir, “ven” patrones más grandes y tienen más patrones con los cuales comparar las cosas. Esto les permite razonar a un nivel superior y buscar información de manera más rápida y precisa. Sin embargo, la fragmentación también puede engañarnos si identificamos mal las cosas: quienes recién llegan a veces pueden notar cosas que las personas expertas han mirado y pasado por alto.

Dada la importancia de la fragmentación para el pensamiento, es tentador identificar design patterns (patrones de diseño) y enseñarlos directamente. Estos patrones ayudan a los practicantes competentes a pensar y dialogar en varios dominios (incluida la enseñanza [Berg2012]), pero los catálogos de patrones son demasiado áridos y abstractos como para que las personas novatas les encuentren sentido por sí mismas. Dicho esto, dar nombres a un pequeño número de patrones parece ayudar con la enseñanza, principalmente dando a los alumnos un vocabulario más rico para pensar y comunicarse [Kuit2004,Byck2005,Saja2006]. Volveremos a este tema en Section [s:pck-programación].

Convirtiéndose en persona experta

Entonces, ¿cómo se convierte alguien en una persona experta? La idea de que diez mil horas de práctica lo lograrán es ampliamente citada, pero probablemente no sea verdad: hacer lo mismo una y otra vez es más probable que fortalezca los malos hábitos a que mejore el desempeño. Lo que realmente funciona es hacer cosas similares pero sutilmente diferentes, poniendo atención en qué funciona y qué no, y luego cambiar el comportamiento en respuesta a las devoluciones para mejorar de forma acumulativa. Esto se llama práctica deliberada o reflexiva, y una progresión común es que las personas pasen por tres etapas:

Actuar según las devoluciones de otros.

Los estudiantes pueden escribir un ensayo sobre qué hicieron en sus vacaciones de verano y recibir devoluciones de un enseñante que les diga cómo mejorarlo.

Dar devoluciones sobre el trabajo de otros.

Los estudiantes pueden realizar críticas de la evolución de un personaje en una novela de Harry Potter y recibir una devolución de un enseñante sobre esas críticas.

Darse devoluciones a sí mismos.

En algún punto, los estudiantes empiezan a criticar sus propios trabajos a medida que los hacen, usando las habilidades que han construido. Hacer esto es mucho más rápido que esperar las devoluciones de otras personas, y es entonces cuando su competencia realmente empieza a despegar.

¿Qué cuenta como práctica deliberada?

[Macn2014] descubrió que “la práctica deliberada explicaba el 26% de la varianza en el rendimiento de los juegos, 21% para música, 18% para deportes, 4% para educación, y menos del 1% para profesiones.” Sin embargo, [Eric2016] criticó este hallazgo diciendo: “Sumar cada hora de cualquier tipo de práctica durante la carrera de un individuo implica que el impacto de todos los tipos de actividad práctica respecto al rendimiento es igual, una suposición que es inconsistente con la evidencia.” Para ser efectiva, la práctica deliberada requiere tanto un objetivo de rendimiento claro como una devolución informativa inmediata, ambas cosas que los enseñantes, de cualquier manera, deberían esforzarse en conseguir.

Ejercicios

Mapear Conceptos (de a pares/30)

Dibuja un mapa conceptual sobre algo que puedas enseñar en cinco minutos. Intercambia el mapa con tu colega y critiquen el mapa de cada uno. ¿Presentan conceptos o detalles superficiales? ¿Cuáles de las relaciones en el mapa de tu colega consideras conceptos y viceversa?

Mapeo de conceptos (Nuevamente) (grupos pequeños/20)

Trabajando en grupos de 3–4 personas, cada una debe dibujar, independientemente del resto, un mapa conceptual mostrando su modelo mental de qué sucede en un aula. Cuando todos hayan terminado, comparen los mapas conceptuales. ¿Dónde coinciden y difieren sus modelos mentales?

Mejora de la memoria a corto plazo (individual/5 minutos)

[Cher2007] sugiere que la razón principal por la que las personas dibujan diagramas cuando discuten cosas es ampliar su memoria a corto plazo: señalar una burbuja dibujada hace unos minutos provoca el recuerdo de varios minutos de debate. Cuando intercambiaste mapas conceptuales en el ejercicio anterior, ¿qué tan fácil fue para otras personas entender lo que significaba tu mapa? ¿Qué tan fácil sería para ti si lo dejaras de lado por un día o dos y luego lo miraras de nuevo?

Eso es un poco autorreferencial, ¿no? (toda la clase/30)

Trabajando independientemente, dibuja un mapa conceptual para mapas conceptuales. Compara tu mapa conceptual con los dibujados por los demás. ¿Qué incluyeron la mayoría de las personas? ¿Cuáles fueron las diferencias más significativas?

Notar tus puntos ciegos (grupos pequeños/10)

Elizabeth Wickes listó todo aquello que necesitas entender para leer esta línea de Python:

answers = ['tuatara', 'tuataras', 'bus', "lick"]

  • Los corchetes rodeando el contenido significan que estamos trabajando con una lista (a diferencia de corchetes inmediatamente a la derecha de algo, que es la notación utilizada para una extracción de datos).

  • Los elementos se separan por comas fuera de las comillas (en vez de adentro, como sería para un texto citado).

  • Cada elemento es una cadena de caracteres, y lo sabemos por las comillas. Aquí podríamos tener números u otro tipo de datos si quisiéramos; necesitamos comillas porque estamos trabajando con cadenas.

  • Estamos mezclando el uso de comillas simples y dobles; a Python no le importa siempre que estén balanceadas alrededor de las cadenas individuales (que para cada comilla que abre haya una que cierre).

  • A cada coma le sigue un espacio, que no es obligatorio para Python, pero que preferimos para una lectura más clara.

Una persona experta ni siquiera vería ninguno de estos detalles. Trabajando en grupos de 3–4 personas, seleccionen algo igualmente corto de una lección que hayan enseñado o aprendido y divídanlo a este nivel de detalle.

Qué enseñar a continuación (individual/5)

Vuelve al mapa conceptual para la fotosíntesis en Figure [f:memory-photosynthesis]. ¿Cuántos conceptos y relaciones hay en los fragmentos seleccionados? ¿Qué incluirías en el próximo fragmento de la lección y por qué?

El poder de fragmentación (individual/5)

Mira Figure [f:memory-unchunked] por 10 segundos, luego mira hacia otro lado e intenta escribir tu número de teléfono con estos símbolos. (Usa un espacio para el ’0’.) Cuando hayas terminado, mira la representación alternativa en Appendix 23. ¿Cuánto más fáciles de recordar son los símbolos cuando el patrón se hace explícito?

Representación sin fragmentar

Cognitive Architecture

We have been talking about mental models as if they were real things, but what actually goes on in a learner’s brain when they’re learning? The short answer is that we don’t know; the longer answer is that we know a lot more than we used to. This chapter will dig a little deeper into what brains do while they’re learning and how we can leverage that to design and deliver lessons more effectively.

What’s Going On In There?

Cognitive architecture

Figure [f:arch-model] is a simplified model of human cognitive architecture. The core of this model is the separation between short-term and long-term memory discussed in Section 3.2. Long-term memory is like your basement: it stores things more or less permanently, but you can’t access its contents directly. Instead, you rely on your short-term memory, which is the cluttered kitchen table of your mind.

When you need something, your brain retrieves it from long-term memory and puts it in short-term memory. Conversely, new information that arrives in short-term memory has to be encoded to be stored in long-term memory. If that information isn’t encoded and stored, it’s not remembered and learning hasn’t taken place.

Information gets into short-term memory primarily through your verbal channel (for speech) and visual channel (for images). Most people rely primarily on their visual channel, but when images and words complement each other, the brain does a better job of remembering them both: they are encoded together, so recall of one later on helps trigger recall of the other.

Linguistic and visual input are processed by different parts of the human brain, and linguistic and visual memories are stored separately as well. This means that correlating linguistic and visual streams of information takes cognitive effort: when someone reads something while hearing it spoken aloud, their brain can’t help but check that it’s getting the same information on both channels.

Learning is therefore increased when information is presented simultaneously in two different channels, but is reduced when that information is redundant rather than complementary, a phenomenon called the split-attention effect [Maye2003]. For example, people generally find it harder to learn from a video that has both narration and on-screen captions than from one that has either the narration or the captions but not both, because some of their attention has to be devoted to checking that the narration and the captions agree with each other. Two notable exceptions to this are people who do not yet speak the language well and people with hearing impairments or other special needs, both of whom may find that the value of the redundant information outweighs the extra processing effort.

Piece by Piece

The split attention effect explains why it’s more effective to draw a diagram piece by piece while teaching than to present the whole thing at once. If parts of the diagram appear at the same time as things are being said, the two will be correlated in the learner’s memory. Pointing at part of the diagram later is then more likely to trigger recall of what was being said when that part was being drawn.

The split-attention effect does not mean that learners shouldn’t try to reconcile multiple incoming streams of information—after all, this is what they have to do in the real world [Atki2000]. Instead, it means that instruction shouldn’t require people to do it while they are first mastering unit skills; instead, using multiple sources of information simultaneously should be treated as a separate learning task.

Not All Graphics are Created Equal

[Sung2012] presents an elegant study that distinguishes seductive graphics (which are highly interesting but not directly relevant to the instructional goal), decorative graphics (which are neutral but not directly relevant to the instructional goal), and instructive graphics (which are directly relevant to the instructional goal). Learners who received any kind of graphic gave material higher satisfaction ratings than those who didn’t get graphics, but only learners who got instructive graphics actually performed better.

Similarly, [Stam2013,Stam2014] found that having more information can actually lower performance. They showed children pictures, pictures and numbers, or numbers alone for two tasks. On some tasks, having pictures or pictures and numbers outperformed having numbers alone, but on others, having pictures alone outperformed pictures and numbers, which in turn outperformed numbers alone.

Cognitive Load

In [Kirs2006], Kirschner, Sweller and Clark wrote:

Although unguided or minimally guided instructional approaches are very popular and intuitively appealing…these approaches ignore both the structures that constitute human cognitive architecture and evidence from empirical studies over the past half-century that consistently indicate that minimally guided instruction is less effective and less efficient than instructional approaches that place a strong emphasis on guidance of the student learning process. The advantage of guidance begins to recede only when learners have sufficiently high prior knowledge to provide “internal” guidance.

Beneath the jargon, the authors were claiming that having learners ask their own questions, set their own goals, and find their own path through a subject is less effective than showing them how to do things step by step. The “choose your own adventure” approach is known as inquiry-based learning, and is intuitively appealing: after all, who would argue against having learners use their own initiative to solve real-world problems in realistic ways? However, asking learners to do this in a new domain overloads them by requiring them to master a domain’s factual content and its problem-solving strategies at the same time.

More specifically, cognitive load theory proposes that people have to deal with three things when they’re learning:

Intrinsic load

is what people have to keep in mind in order to absorb new material.

Germane load

is the (desirable) mental effort required to link new information to old, which is one of the things that distinguishes learning from memorization.

Extraneous load

is anything that distracts from learning.

Cognitive load theory holds that people have to divide a fixed amount of working memory between these three things. Our goal as teachers is to maximize the memory available to handle intrinsic load, which means reducing the germane load at each step and eliminating the extraneous load.

Parsons Problems

One kind of exercise that can be explained in terms of cognitive load is often used when teaching languages. Suppose you ask someone to translate the sentence, “How is her knee today?” into Frisian. To solve the problem, they need to recall both vocabulary and grammar, which is a double cognitive load. If you ask them to put “hoe,” “har,” “is,” “hjoed,” and “knie” in the right order, on the other hand, you are allowing them to focus solely on learning grammar. If you write these words in five different fonts or colors, though, you have increased the extraneous cognitive load, because they will involuntarily (and possibly unconsciously) expend some effort trying to figure out if the differences are meaningful (Figure [f:architecture-frisian]).

Constructing a sentence

The coding equivalent of this is called a Parsons Problem [Pars2006]. When teaching people to program, you can give them the lines of code they need to solve a problem and ask them to put them in the right order. This allows them to concentrate on control flow and data dependencies without being distracted by variable naming or trying to remember what functions to call. Multiple studies have shown that Parsons Problems take less time for learners to do but produce equivalent educational outcomes [Eric2017].
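A Parsons Problem can be generated mechanically from a working solution. This is a minimal sketch in Python; the function names and the grading rule (an attempt is correct only if it restores the original order) are illustrative assumptions:

```python
import random

def make_parsons_problem(solution_lines, seed=None):
    """Return the solution's lines in jumbled order (a simple Parsons Problem).

    Indentation is stripped so that learners focus on ordering, not layout.
    """
    lines = [line.strip() for line in solution_lines]
    jumbled = lines[:]
    rng = random.Random(seed)
    while jumbled == lines:  # make sure the order actually changed
        rng.shuffle(jumbled)
    return jumbled

def is_correct(solution_lines, attempt):
    """An attempt is correct if it restores the original order."""
    return [l.strip() for l in solution_lines] == [l.strip() for l in attempt]

solution = [
    "total = 0",
    "for word in words:",
    "    total = total + len(word)",
    "print(total)",
]
problem = make_parsons_problem(solution, seed=1)
```

Stripping indentation follows the advice given in the exercises below: in an indentation-based language, leaving the indentation in place would give away part of the answer.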

Faded Examples

Another type of exercise that can be explained in terms of cognitive load is to give learners a series of faded examples. The first example in a series presents a complete use of a particular problem-solving strategy. The next problem is of the same type, but has some gaps for the learner to fill in. Each successive problem gives the learner less scaffolding, until they are asked to solve a complete problem from scratch. When teaching high school algebra, for example, we might start with this:

(4x + 8)/2 = 5
4x + 8 = 2 * 5
4x + 8 = 10
4x = 10 - 8
4x = 2
x = 2 / 4
x = 1 / 2

and then ask learners to solve this:

(3x - 1)*3 = 12
3x - 1 = _ / _
3x - 1 = 4
3x = _
x = _ / 3
x = _

and this:

(5x + 1)*3 = 4
5x + 1 = _
5x = _
x = _

and finally this:

(2x + 8)/4 = 1
x = _

A similar exercise for teaching Python might start by showing learners how to find the total length of a list of words:

# total_length(["red", "green", "blue"]) => 12
def total_length(list_of_words):
    total = 0
    for word in list_of_words:
        total = total + len(word)
    return total

and then ask them to fill in the blanks in this (which focuses their attention on control structures):

# word_lengths(["red", "green", "blue"]) => [3, 5, 4]
def word_lengths(list_of_words):
    list_of_lengths = []
    for ____ in ____:
        list_of_lengths.append(____)
    return list_of_lengths

The next problem might be this (which focuses their attention on updating the final result):

# join_all(["red", "green", "blue"]) => "redgreenblue"
def join_all(list_of_words):
    joined_words = ____
    for ____ in ____:
        ____
    return joined_words

Learners would finally be asked to write an entire function on their own:

# make_acronym(["red", "green", "blue"]) => "RGB"
def make_acronym(list_of_words):
    ____

Faded examples work because they introduce the problem-solving strategy piece by piece: at each step, learners have one new problem to tackle, which is less intimidating than a blank screen or a blank sheet of paper (Section 9.11). The approach also encourages learners to think about the similarities and differences between various approaches, which helps create the linkages in their mental models that help retrieval.

The key to constructing a good faded example is to think about the problem-solving strategy it is meant to teach. For example, the programming problems above all use the accumulator design pattern, in which the results of processing items from a collection are repeatedly added to a single variable in some way to create the final result.
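To make the pattern concrete, here is one possible completed solution to the final exercise above, with the accumulator steps called out in comments. This is a sketch, not the only answer: the rule of taking each word’s uppercased first letter is an assumption about how the acronym is formed.

```python
# make_acronym(["red", "green", "blue"]) => "RGB"
def make_acronym(list_of_words):
    # Accumulator pattern: start with an empty result of the right type...
    acronym = ""
    # ...process each item from the collection in turn...
    for word in list_of_words:
        # ...and fold its contribution into the single result variable.
        acronym = acronym + word[0].upper()
    return acronym
```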

Cognitive Apprenticeship

An alternative model of learning and instruction that also uses scaffolding and fading is cognitive apprenticeship, which emphasizes the way in which a master passes on skills and insights to an apprentice. The master provides models of performance and outcomes, then coaches novices by explaining what they are doing and why [Coll1991,Casp2007]. The apprentice reflects on their own problem solving, e.g. by thinking aloud or critiquing their own work, and eventually explores problems of their own choosing.

This model tells us that teachers should present several examples when presenting a new idea so that learners can see what to generalize, and that we should vary the form of the problem to make it clear what are and aren’t superficial features. Problems should be presented in real-world contexts, and we should encourage self-explanation to help learners organize and make sense of what they have just been taught (Section 5.1).

Labeled Subgoals

Labeling subgoals means giving names to the steps in a step-by-step description of a problem-solving process. [Marg2016,Morr2016] found that learners with labeled subgoals solved Parsons Problems better than learners without, and the same benefit is seen in other domains [Marg2012]. Returning to the Python example used earlier, the subgoals in finding the total length of a list of words or constructing an acronym are:

  1. Create an empty value of the type to be returned.

  2. Get the value to be added to the result from the loop variable.

  3. Update the result with that value.

Labeling subgoals works because grouping related steps into named chunks (Section 3.2) helps learners distinguish what’s generic from what is specific to the problem at hand. It also helps them build a mental model of that kind of problem so that they can solve other problems of that kind, and gives them a natural opportunity for self-explanation (Section 5.1).
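One way to present labeled subgoals is to write them directly into the code as comments. The sketch below does this for the list-of-word-lengths example, using the three subgoals listed above (the comment wording is an illustration, not the only possible labeling):

```python
def total_length(list_of_words):
    # Subgoal 1: create an empty value of the type to be returned.
    total = 0
    for word in list_of_words:
        # Subgoal 2: get the value to be added from the loop variable.
        word_length = len(word)
        # Subgoal 3: update the result with that value.
        total = total + word_length
    return total
```

The same three comments fit `word_lengths`, `join_all`, and `make_acronym` unchanged, which is exactly what makes the labels useful: they name what is generic about the pattern rather than what is specific to any one problem.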

Minimal Manuals

The purest application of cognitive load theory may be John Carroll’s minimal manual [Carr1987,Carr2014]. Its starting point is a quote from a user: “I want to do something, not learn how to do everything.” Carroll and colleagues redesigned training to present every idea as a single-page self-contained task: a title describing what the page was about, step-by-step instructions for doing just one thing (e.g. how to delete a blank line in a text editor), and then several notes on how to recognize and debug common problems. They found that rewriting training materials this way made them shorter overall, and that people using them learned faster. Later studies confirmed that this approach outperformed the traditional approach regardless of prior experience with computers [Lazo1993]. [Carr2014] summarized this work by saying:

Our “minimalist” designs sought to leverage user initiative and prior knowledge, instead of controlling it through warnings and ordered steps. It emphasized that users typically bring much expertise and insight to this learning, for example, knowledge about the task domain, and that such knowledge could be a resource to instructional designers. Minimalism leveraged episodes of error recognition, diagnosis, and recovery, instead of attempting to merely forestall error. It framed troubleshooting and recovery as learning opportunities instead of as aberrations.

Other Models of Learning

Critics of cognitive load theory have sometimes argued that any result can be justified after the fact by labeling things that hurt performance as extraneous load and things that don’t as intrinsic or germane. However, instruction based on cognitive load theory is undeniably effective. For example, [Maso2016] redesigned a database course to remove split-attention and redundancy effects and to provide worked examples and subgoals. The new course reduced the exam failure rate by 34% and increased learner satisfaction.

A decade after the publication of [Kirs2006], a growing number of people believe that cognitive load theory and inquiry-based approaches are compatible if viewed in the right way. [Kaly2015] argues that cognitive load theory is basically micro-management of learning within a broader context that considers things like motivation, while [Kirs2018] extends cognitive load theory to include collaborative aspects of learning. As with [Mark2018] (discussed in Section 5.1), researchers’ perspectives may differ, but the practical implementations of their theories often wind up being the same.

One of the challenges in educational research is that what we mean by “learning” turns out to be complicated once you look beyond the standardized Western classroom. Two specific perspectives from educational psychology have influenced this book. The one we have used so far is cognitivism, which focuses on things like pattern recognition, memory formation, and recall. It is good at answering low-level questions, but generally ignores larger issues like, “What do we mean by ‘learning’?” and, “Who gets to decide?” The other is situated learning, which focuses on bringing people into a community and recognizes that teaching and learning are always rooted in who we are and who we aspire to be. We will discuss it in more detail in Chapter 13.

The Learning Theories website and [Wibu2016] have good summaries of these and other perspectives. Besides cognitivism, those encountered most frequently include behaviorism (which treats education as stimulus/response conditioning), constructivism (which considers learning an active process during which learners construct knowledge for themselves), and connectivism (which holds that knowledge is distributed, that learning is the process of navigating, growing, and pruning connections, and which emphasizes the social aspects of learning made possible by the internet). These perspectives can help us organize our thoughts, but in practice, we always have to try new methods in the class, with actual learners, in order to find out how well they balance the many forces in play.

Exercises

Create a Faded Example (pairs/30)

It’s very common for programs to count how many things fall into different categories: for example, how many times different colors appear in an image, or how many times different words appear in a paragraph of text.

  1. Create a short example (no more than 10 lines of code) that shows people how to do this, and then create a second example that solves a similar problem in a similar way but has a couple of blanks for learners to fill in. How did you decide what to fade out? What would the next example in the series be?

  2. Define the audience for your examples. For example, are they beginners who only know some basic programming concepts, or learners with some experience in programming?

  3. Show your example to a partner, but do not tell them what level you think it is for. Once they have filled in the blanks, ask them to guess the intended level.

If there are people among the trainees who don’t program at all, try to place them in different groups and have them play the part of learners for those groups. Alternatively, choose a different problem domain and develop a faded example for it.

Classifying Load (small groups/15)

  1. Choose a short lesson that a member of your group has taught or taken recently.

  2. Make a point-form list of the ideas, instructions, and explanations it contains.

  3. Classify each as intrinsic, germane, or extraneous. What did you all agree on? Where did you disagree and why?

(The exercise “Noticing Your Blind Spot” in Section 3.4 will give you an idea of how detailed your point-form list should be.)

Create a Parsons Problem (pairs/20)

Write five or six lines of code that does something useful, jumble them, and ask your partner to put them in order. If you are using an indentation-based language like Python, do not indent any of the lines; if you are using a curly-brace language like Java, do not include any of the curly braces. (If your group includes people who aren’t programmers, use a different problem domain, such as making banana bread.)

Minimal Manuals (individual/20)

Write a one-page guide to doing something that your learners might encounter in one of your classes, such as centering text horizontally or printing a number with a certain number of digits after the decimal point. Try to list at least three or four incorrect behaviors or outcomes the learner might see and include a one- or two-line explanation of why each happens and how to correct it.

Cognitive Apprenticeship (pairs/15)

Pick a coding problem that you can do in two or three minutes and think aloud as you work through it while your partner asks questions about what you’re doing and why. Do not just explain what you’re doing, but also why you’re doing it, how you know it’s the right thing to do, and what alternatives you’ve considered but discarded. When you are done, swap roles with your partner and repeat the exercise.

Worked Examples (pairs/15)

Seeing worked examples helps people learn to program faster than just writing lots of code [Skud2014], and deconstructing code by tracing it or debugging it also increases learning [Grif2016]. Working in pairs, go through a 10–15 line piece of code and explain what every statement does and why it is necessary. How long does it take? How many things do you feel you need to explain per line of code?

Critiquing Graphics (individual/30)

[Maye2009,Mill2016a] present six principles for good teaching graphics:

Signalling:

visually highlight the most important points so that they stand out from less-critical material.

Spatial contiguity:

place captions as close to the graphics as practical to offset the cost of shifting between the two.

Temporal contiguity:

present spoken narration and graphics as close in time as practical. (Presenting both at once is better than presenting them one after another.)

Segmenting:

when presenting a long sequence of material or when learners are inexperienced with the subject, break the presentation into short segments and let learners control how quickly they advance from one to the next.

Pre-training:

if learners don’t know the major concepts and terminology used in your presentation, teach just those concepts and terms beforehand.

Modality:

people learn better from pictures plus narration than from pictures plus text, unless they are non-native speakers or there are technical words or symbols.

Choose a video of a lesson or talk online that uses slides or other static presentations and rate its graphics as “poor,” “average,” or “good” according to these six criteria.

Review

Concepts: Cognitive load

Individual Learning

Previous chapters have explored what teachers can do to help learners. This chapter looks at what learners can do for themselves by changing their study strategies and getting enough rest.

The most effective strategy is to switch from passive learning to active learning [Hpl2018], which significantly improves performance and reduces failure rates [Free2014]:

Passive                 Active
--------------------    -----------------
Read about something    Do exercises
Watch a video           Discuss a topic
Attend a lecture        Try to explain it

Referring back to our simplified model of cognitive architecture (Figure [f:arch-model]), active learning is more effective because it keeps new information in short-term memory longer, which increases the odds that it will be encoded successfully and stored in long-term memory. And by using new information as it arrives, learners build or strengthen ties between that information and what they already know, which in turn increases the chances that they will be able to retrieve it later.

The other key to getting more out of learning is metacognition, or thinking about one’s own thinking. Just as good musicians listen to their own playing and good teachers reflect on their teaching (Chapter 8), learners will learn better and faster if they make plans, set goals, and monitor their progress. It’s difficult for learners to master these skills in the abstract—just telling them to make plans doesn’t have any effect—but lessons can be designed to encourage good study practices, and drawing attention to these practices in class helps learners realize that learning is a skill they can improve like any other [McGu2015,Miya2018].

The big prize is transfer of learning, which occurs when one thing we have learned helps us learn other things more quickly. Researchers distinguish between near transfer, which occurs between similar or related areas like fractions and decimals in mathematics, and far transfer, which occurs between dissimilar domains—for example, the idea that learning to play chess will help mathematical reasoning or vice versa.

Near transfer undoubtedly occurs—no kind of learning beyond simple memorization could occur if it didn’t—and teachers leverage it all the time by giving learners exercises that are similar to material that has just been presented in a lesson. However, [Sala2017] analyzed many studies of far transfer and concluded that:

the results show small to moderate effects. However, the effect sizes are inversely related to the quality of the experimental design… We conclude that far transfer of learning rarely occurs.

When far transfer does occur, it seems to happen only once a subject has been mastered [Gick1987]. In practice, this means that learning to program won’t help you play chess and vice versa.

Six Strategies

Psychologists study learning in a wide variety of ways, but have reached similar conclusions about what actually works [Mark2018]. The Learning Scientists have catalogued six of these strategies and summarized them in a set of downloadable posters. Teaching these strategies to learners, and mentioning them by name when you use them in class, can help them learn how to learn faster and better [Wein2018a,Wein2018b].

Spaced Practice

Ten hours of study spread out over five days is more effective than two five-hour days, and far better than one ten-hour day. You should therefore create a study schedule that spreads study activities over time: block off at least half an hour to study each topic each day rather than trying to cram everything in the night before an exam [Kang2016].

You should also review material after each class, but not immediately after—take at least a half-hour break. When reviewing, be sure to include at least a little bit of older material: for example, spend twenty minutes looking over notes from that day’s class and then five minutes each looking over material from the previous day and from a week before. Doing this also helps you catch any gaps or mistakes in previous sets of notes while there’s still time to correct them or ask questions: it’s painful to realize the night before the exam that you have no idea why you underlined “Demodulate!!” three times.

When reviewing, make notes about things that you had forgotten: for example, make a flash card for each fact that you couldn’t remember or that you remembered incorrectly [Matt2019]. This will help you focus the next round of study on things that most need attention.
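Flash-card review can itself be scheduled so that the cards you get wrong come back soonest. The sketch below assumes the Leitner system, a common flash-card schedule not named in the text: a wrong answer sends a card back to box 1 (reviewed every day), a right answer promotes it, and higher boxes are reviewed at longer intervals. The three-box setup and the 1/2/4-day intervals are illustrative assumptions:

```python
def update_box(box, answered_correctly, max_box=3):
    """Wrong answers send the card back to box 1 (reviewed most often);
    right answers promote it one box, up to max_box."""
    if not answered_correctly:
        return 1
    return min(box + 1, max_box)

def due_today(cards, day):
    """Box 1 is reviewed every day, box 2 every 2nd day, box 3 every 4th.
    `cards` maps each card's prompt to the box it currently sits in."""
    interval = {1: 1, 2: 2, 3: 4}
    return [card for card, box in cards.items() if day % interval[box] == 0]
```

The effect is exactly the focusing described above: material you had forgotten is automatically seen more often in the next round of study.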

The Value of Lectures

According to [Mill2016a], “The lectures that predominate in face-to-face courses are relatively ineffective ways to teach, but they probably contribute to spacing material over time, because they unfold in a set schedule over time. In contrast, depending on how the courses are set up, online students can sometimes avoid exposure to material altogether until an assignment is nigh.”

Retrieval Practice

The limiting factor for long-term memory is not retention (what is stored) but recall (what can be accessed). Recall of specific information improves with practice, so outcomes in real situations can be improved by taking practice tests or summarizing the details of a topic from memory and then checking what was and wasn’t remembered. For example, [Karp2008] found that repeated testing improved recall of word lists from 35% to 80%.

Recall is better when practice uses activities similar to those used in testing. For example, writing personal journal entries helps with multiple-choice quizzes, but less than doing practice quizzes [Mill2016a]. This phenomenon is called transfer-appropriate processing.

One way to exercise retrieval skills is to solve problems twice. The first time, do it entirely from memory without notes or discussion with peers. After grading your own work against a rubric supplied by the teacher, solve the problem again using whatever resources you want. The difference between the two shows you how well you were able to retrieve and apply knowledge.

Another method (mentioned above) is to create flash cards. Physical cards have a question or other prompt on one side and the answer on the other, and many flash card apps are available for phones. If you are studying as part of a group, swapping flash cards with a partner helps you discover important ideas that you may have missed or misunderstood.

Read-cover-retrieve is a quick alternative to flash cards. As you read something, cover up key terms or sections with small sticky notes. When you are done, go through it a second time and see how well you can guess what’s under each of those stickies. Whatever method you use, don’t just practice recalling facts and definitions: make sure you also check your understanding of big ideas and the connections between them. Sketching a concept map and then comparing it to your notes or to a previously-drawn concept map is a quick way to do this.

Hypercorrection

One powerful finding in learning research is the hypercorrection effect [Metc2016]. Most people don’t like to be told they’re wrong, so it would be reasonable to assume that the more confident someone is in the answer they’ve given on a test, the harder it is to change their mind if they were actually wrong. It turns out that the opposite is true: the more confident someone is that they were right, the more likely they are not to repeat the error if they are corrected.

Interleaving

One way you can space your practice is to interleave study of different topics: instead of mastering one subject, then a second and third, shuffle study sessions. Even better, switch up the order: A-B-C-B-A-C is better than A-B-C-A-B-C, which in turn is better than A-A-B-B-C-C [Rohr2015]. This works because interleaving fosters creation of more links between different topics, which in turn improves recall.

How long you should spend on each item depends on the subject and how well you know it. Somewhere between 10 and 30 minutes is long enough for you to get into a state of flow (Section 5.2) but not for your mind to wander. Interleaving study will initially feel harder than focusing on one topic at a time, but that’s a sign that it’s working. If you are using flash cards or practice tests to gauge your progress, you should see improvement after only a couple of days.
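The A-B-C-B-A-C style of ordering can be generated mechanically. This is a minimal sketch under the assumption that study proceeds in repeated shuffled cycles through all topics, with no topic appearing twice in a row:

```python
import random

def interleave(topics, sessions, seed=0):
    """Cycle through every topic, shuffling the order within each cycle,
    so study looks like A-B-C-B-A-C rather than A-B-C-A-B-C."""
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < sessions:
        cycle = topics[:]
        rng.shuffle(cycle)
        # Avoid studying the same topic twice in a row where cycles meet.
        if schedule and cycle[0] == schedule[-1]:
            cycle.append(cycle.pop(0))
        schedule.extend(cycle)
    return schedule[:sessions]
```

For example, `interleave(["algebra", "geometry", "stats"], 6)` yields six study sessions that cover each topic twice without ever repeating a topic back to back.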

Elaboration

Explaining things to yourself as you go through them helps you understand and remember them. One way to do this is to follow up each answer on a practice quiz with an explanation of why that answer is correct, or conversely with an explanation of why some other plausible answer isn’t. Another is to tell yourself how a new idea is similar to or different from one you have seen previously.

Talking to yourself may seem like an odd way to study, but [Biel1995] found that people trained in self-explanation outperformed those who hadn’t been trained. Similarly, [Chi1989] found that some learners simply halt when they hit a step they don’t understand when trying to solve problems. Others pause and generate an explanation of what’s going on, and the latter group learns faster. An exercise to build this skill is to go through an example program line by line with a class, having a different person explain each line in turn and say why it is there and what it accomplishes.

Concrete Examples

One particularly useful form of elaboration is the use of concrete examples. Whenever you have a statement of a general principle, try to provide one or more examples of its use, or conversely take each particular problem and list the general principles it embodies. [Raws2014] found that interleaving examples and definitions like this made it more likely that learners would remember the definitions correctly.

One structured way to do this is the ADEPT method: give an Analogy, draw a Diagram, present an Example, describe the idea in Plain language, and then give the Technical details. Again, if you are studying with a partner or in a group, you can swap and check work: see if you agree that other people’s examples actually embody the principle being discussed or which principles are used in an example that they haven’t listed.

Another useful technique is to teach by contrast, i.e. to show learners what a solution is not or what kind of problem a technique won’t solve. For example, when showing children how to simplify fractions, it’s important to give them a few like 5/7 that can’t be simplified so that they don’t become frustrated looking for answers that don’t exist.

Dual Coding

The last of the six core strategies that the Learning Scientists describe is to present words and images together. As discussed in Section 4.1, different subsystems in our brains handle and store linguistic and visual information, so if complementary information is presented through both channels, they can reinforce one another. However, learning is less effective when the same information is presented simultaneously in two different channels, because then the brain has to expend effort to check the channels against each other [Maye2003].

One way to take advantage of dual coding is to draw or label timelines, maps, family trees, or whatever else seems appropriate to the material. (I am personally fond of pictures showing which functions call which others in a program.) Drawing a diagram without labels, then coming back later to label it, is excellent retrieval practice.
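As one way of producing such a picture, the sketch below uses Python’s `ast` module to work out which functions call which others; the output is the raw data for a call diagram. Tracking only calls to plain names (not methods or attributes) is a simplifying assumption:

```python
import ast

def call_graph(source):
    """Map each top-level function in `source` to the names it calls."""
    tree = ast.parse(source)
    graph = {}
    for node in tree.body:
        if isinstance(node, ast.FunctionDef):
            calls = {
                c.func.id
                for c in ast.walk(node)
                if isinstance(c, ast.Call) and isinstance(c.func, ast.Name)
            }
            graph[node.name] = sorted(calls)
    return graph

program = """
def helper(x):
    return len(str(x))

def main(items):
    return sum(helper(i) for i in items)
"""
```

Labeling the resulting diagram yourself, rather than reading a pre-drawn one, is what turns this into retrieval practice.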

Time Management

I used to brag about the hours I was working. Not in so many words, of course—I had some social skills—but I would show up for class around noon, unshaven and yawning, and casually mention to whoever would listen that I’d been up working until 6:00 a.m.

Looking back, I can’t remember who I was trying to impress. What I remember instead is how much of the work I did in those all-nighters I threw away once I’d had some sleep, and how much damage the stuff I didn’t throw away did to my grades.

My mistake was to confuse “working” with “being productive.” You can’t produce software (or anything else) without doing some work, but you can easily do lots of work without producing anything of value. Convincing people of this can be hard, especially when they’re in their teens or twenties, but it pays tremendous dividends.

Scientific study of overwork and sleep deprivation goes back to at least the 1890s—see [Robi2005] for a short, readable summary. The most important results for learners are:

  1. Working more than 8 hours a day for an extended period of time lowers your total productivity, not just your hourly productivity—i.e. you get less done in total (not just per hour) when you’re in crunch mode.

  2. Working more than 21 hours in a stretch increases the odds of making a catastrophic error as much as being legally drunk does.

  3. Productivity varies over the course of the workday, with the greatest productivity occurring in the first 4 to 6 hours. After enough hours, productivity approaches zero; eventually it becomes negative.

These facts have been reproduced and verified for over a century, and the data behind them is as solid as the data linking smoking to lung cancer. The problem is that people usually don’t notice their abilities declining. Like drunks who think they are still able to drive, people who are deprived of sleep don’t realize that they are not finishing their sentences (or thoughts). Five 8-hour days per week has been proven to maximize long-term total output in every industry that has ever been studied; studying or programming are no different.

But what about short bursts now and then, like pulling an all-nighter to meet a deadline? That has been studied too, and the results aren’t pleasant. Your ability to think drops by 25% for each 24 hours you’re awake. Put another way, the average person’s IQ is only 75 after one all-nighter, which puts them in the bottom 5% of the population. If you do two all-nighters in a row, your effective IQ is 50, which is the level at which people are usually judged incapable of independent living.

“But—but—I have so many assignments to do!” you say. “And they’re all due at once! I have to work extra hours to get them all done!” No: people have to prioritize and focus in order to be productive, and in order to do that, they have to be taught how. One widely-used technique is to make a list of things that need to be done, sort them by priority, and then switch off email and other interruptions for 30–60 minutes and complete one of those tasks. If any task on a to-do list is more than an hour long, break it down into smaller pieces and prioritize those separately.
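The prioritize-and-split technique just described can be sketched in a few lines of Python. The task format (name, priority, estimated hours) and the one-hour chunk size are assumptions for illustration:

```python
def plan(tasks, max_chunk=1.0):
    """Split any task longer than an hour into smaller pieces,
    then sort everything by priority (lower number = more urgent)."""
    pieces = []
    for name, priority, hours in tasks:
        n = 1
        while hours > max_chunk:
            pieces.append((priority, f"{name} (part {n})", max_chunk))
            hours -= max_chunk
            n += 1
        label = f"{name} (part {n})" if n > 1 else name
        pieces.append((priority, label, hours))
    return [name for _, name, _ in sorted(pieces, key=lambda p: p[0])]
```

Each entry in the returned list is then a single 30-to-60-minute block of uninterrupted work.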

The most important part of this is switching off interruptions. Despite what many people want to believe, human beings are not good at multi-tasking. What we can become good at is automaticity, which is the ability to do something routine in the background while doing something else [Mill2016a]. Most of us can talk while chopping onions, or drink coffee while reading; with practice, we can also take notes while listening, but we can’t study effectively, program, or do other mentally challenging tasks while paying attention to something else—we only think we can.

The point of organizing and preparing is to get into the most productive mental state possible. Psychologists call it flow [Csik2008]; athletes call it “being in the zone,” and musicians talk about losing themselves in what they’re playing. Whatever name you use, people produce much more per unit of time in this state than normal. The bad news is that it takes roughly 10 minutes to get back into a state of flow after an interruption, no matter how short the interruption was. This means that if you are interrupted half a dozen times per hour, you are never at your productive peak.

How Did He Know?

In his 1961 short story “Harrison Bergeron,” Kurt Vonnegut described a future in which everyone is forced to be equal. Good-looking people have to wear masks, athletic people have to wear weights—and intelligent people are forced to carry around radios that interrupt their thoughts at random intervals. I sometimes wonder if—oh, hang on, my phone just—sorry, what were we talking about?

Peer Assessment

Asking people on a team to rate their peers is a common practice in industry. [Sond2012] surveyed the literature on peer assessment, distinguishing between grading and reviewing. They found that peer assessment increased the amount, diversity, and timeliness of feedback, helped learners exercise higher-level thinking, encouraged reflective practice, and supported development of social skills. The concerns were predictable: validity and reliability, motivation and procrastination, trolls, collusion, and plagiarism.

However, the evidence shows that these concerns aren’t significant in most classes. For example, [Kauf2000] compared confidential peer ratings and grades on several axes for two undergraduate engineering courses and found that self-rating and peer ratings statistically agreed, that collusion wasn’t significant (i.e. people didn’t just give all their peers high grades), that learners didn’t inflate their self-ratings, and crucially, that ratings were not biased by gender or race.

One way to implement peer assessment is contributing student pedagogy, in which learners produce artifacts to contribute to others’ learning. This can be developing a short lesson and sharing it with the class, adding to a question bank, or writing up notes from a particular lecture for in-class publication. For example, [Fran2018] found that learners who made short videos to teach concepts to their peers had a significant increase in their own learning compared to those who only studied the material or viewed the videos. I have found that asking learners to share one bug and its fix with the class every day helps their analytic abilities and reduces impostor syndrome.

Another approach is calibrated peer review, in which a learner reviews one or more examples using a rubric and compares their evaluation against the teacher’s review of the same work [Kulk2013]. Once learners’ evaluations are close enough to the teacher’s, they start evaluating their peers’ actual work. If several peers’ assessments are combined, this can be as accurate as assessment by teachers [Pare2008].
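Mechanically, calibrated peer review boils down to two steps: check each reviewer against the teacher, then average the scores of the reviewers who pass. The sketch below is a hypothetical illustration; the tolerance value and the scoring scale are invented:

```python
# Hypothetical sketch of calibrated peer review: a learner is "calibrated"
# once their scores on practice examples are close enough to the teacher's,
# and several calibrated peers' scores are then combined by averaging.

def is_calibrated(peer_scores, teacher_scores, tolerance=1.0):
    """True if the peer's scores are within `tolerance` of the teacher's on average."""
    diffs = [abs(p - t) for p, t in zip(peer_scores, teacher_scores)]
    return sum(diffs) / len(diffs) <= tolerance

def combined_grade(peer_grades):
    """Average the grades given by several calibrated peers."""
    return sum(peer_grades) / len(peer_grades)

assert is_calibrated([4, 3, 5], [4, 4, 5])      # close to the teacher's scores
assert not is_calibrated([1, 1, 5], [4, 4, 5])  # too far off to count yet
print(combined_grade([4, 5, 4]))
```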

Like everything else, assessment is aided by rubrics. The evaluation form in Section 21.2 shows a sample to get you started. To use it, rank yourself and each of your teammates, then calculate and compare scores. Large disparities usually indicate a need for a longer conversation.

Exercises

Learning Strategies (individual/20)

  1. Which of the six learning strategies do you regularly use? Which ones do you not?

  2. Write down three general concepts that you want your learners to master and give two specific examples of each (concrete examples practice). For each of those concepts, work backward from one of your examples to explain how the concept explains it (elaboration).

Connecting Ideas (pairs/5)

This exercise is an example of using elaboration to improve retention. Pick a partner and have each person independently choose an idea; then announce your ideas and try to find a four-link chain that leads from one to the other. For example, if the two ideas are “Saskatchewan” and “statistics,” the links might be:

  • Saskatchewan is a province of Canada;

  • Canada is a country;

  • countries have governments;

  • governments use statistics to analyze public opinion.

Convergent Evolution (pairs/15)

One practice that wasn’t covered above is guided notes, which are notes prepared by the teacher that cue learners to respond to key information in a lecture or discussion. The cues can be blank spaces where learners add information, asterisks next to terms learners should define, and so on.

Create two to four guided note cards for a lesson you have recently taught or are going to teach. Swap cards with your partner: how easy is it to understand what is being asked for? How long would it take to fill in the prompts? How well does this work for programming examples?

Changing Minds (pairs/10)

[Kirs2013] argues that myths about digital natives, learning styles, and self-educators are all reflections of the mistaken belief that learners know what is best for them, and cautions that we may be in a downward spiral in which every attempt by education researchers to rebut these myths confirms their opponents’ belief that learning science is pseudo-science. Pick one thing you have learned about learning so far in this book that surprised you or contradicted something you previously believed and practice explaining it to a partner in 1–2 minutes. How convincing are you?

Flash Cards (individual/15)

Use sticky notes or anything else you have at hand to make up half a dozen flash cards for a topic you have recently taught or learned. Trade with a partner and see how long it takes each of you to achieve 100% perfect recall. Set the cards aside when you are done, then come back after half an hour and see what your recall rate is.

Using ADEPT (whole class/15)

Pick something you have recently taught or been taught and outline a short lesson that uses the five-step ADEPT method to introduce it.

The Cost of Multi-Tasking (pairs/10)

The Learning Scientists blog describes a simple experiment you can do with only a stopwatch to demonstrate the mental cost of multi-tasking. Working in pairs, measure how long it takes each person to do each of these three tasks:

  • Count from 1 to 26 twice.

  • Recite the alphabet from A to Z twice.

  • Interleave the numbers and letters, i.e. say, “1, A, 2, B,” and so on.

Have each pair report their numbers. Without specific practice, the third task always takes significantly longer than either of the component tasks.

Myths in Computing Education (whole class/20)

[Guzd2015b] presents a list of the top ten mistaken beliefs about computing education, which includes:

  1. The lack of women in Computer Science is just like all the other STEM fields.

  2. To get more women in CS, we need more female CS faculty.

  3. Student evaluations are the best way to evaluate teaching.

  4. Good teachers personalize education for students’ learning styles.

  5. A good CS teacher should model good software development practice because their job is to produce excellent software engineers.

  6. Some people are just naturally better programmers than others.

Have everyone vote +1 (agree), -1 (disagree), or 0 (not sure) for each point, then read the full explanations in the original article and vote again. Which ones did people change their minds on? Which ones do they still believe are true, and why?

Calibrated Peer Review (pairs/20)

  1. Create a 5–10 point rubric with entries like “good variable names,” “no redundant code,” and “properly-nested control flow” for grading the kind of programs you would like your learners to write.

  2. Choose or create a small program that contains 3–4 violations of these entries.

  3. Grade the program according to your rubric.

  4. Have a partner grade the same program with the same rubric. What do they accept that you did not? What do they critique that you did not?

Review

Concepts: Active learning

A Lesson Design Process

Most people design lessons like this:

  1. Someone asks you to teach something you barely know or haven’t thought about in years.

  2. You start writing slides to explain what you know about the subject.

  3. After 2 or 3 weeks, you make up an assignment based on what you’ve taught so far.

  4. You repeat step 3 several times.

  5. You stay awake into the wee hours of the morning to create a final exam and promise yourself that you’ll be more organized next time.

A more effective method is similar in spirit to a programming practice called test-driven development (TDD). Programmers who use TDD don’t write software and then test that it is working correctly. Instead, they write the tests first, then write just enough new software to make those tests pass.

TDD works because writing tests forces programmers to be precise about what they’re trying to accomplish and what “done” looks like. TDD also prevents endless polishing: when the tests pass, you stop coding. Finally, it reduces the risk of confirmation bias: someone who hasn’t yet written a piece of software will be more objective than someone who has just put in several hours of hard work and really, really wants to be done.
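A minimal, hypothetical illustration of this rhythm (the function `count_words` is invented for the example). The test is written first and defines what “done” looks like; the code comes second and does only what the test demands:

```python
# Step 1: write the test first. It is the specification.
def test_count_words():
    assert count_words("") == 0
    assert count_words("one two three") == 3
    assert count_words("  spaced   out  ") == 2

# Step 2: write just enough code to make the test pass.
def count_words(text):
    return len(text.split())

test_count_words()  # passes, so we stop coding
```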

A similar method called backward design works very well for lesson design. This method was developed independently in [Wigg2005,Bigg2011,Fink2013] and is summarized in [McTi2013]. In simplified form, its steps are:

  1. Create or recycle learner personas (discussed in the next section) to figure out who you are trying to help and what will appeal to them.

  2. Brainstorm to get a rough idea of what you want to cover, how you’re going to do it, what problems or misconceptions you expect to encounter, what’s not going to be included, and so on. Drawing concept maps can help a lot at this stage (Section 3.1).

  3. Create a summative assessment (Section 2.1) to define your overall goal. This can be the final exam for a course or the capstone project for a one-day workshop; regardless of its form or size, it shows how far you hope to get more clearly than a point-form list of objectives.

  4. Create formative assessments that will give people a chance to practice the things they’re learning. These will also tell you (and them) whether they’re making progress and where they need to focus their attention. The best way to do this is to itemize the knowledge and skills used in the summative assessment you developed in the previous step and then create at least one formative assessment for each.

  5. Order the formative assessments to create a course outline based on their complexity, their dependencies, and how well topics will motivate your learners (Section 10.1).

  6. Write material to get learners from one formative assessment to the next. Each hour of instruction should consist of three to five such episodes.

  7. Write a summary description of the course to help its intended audience find it and figure out whether it’s right for them.

This method helps keep teaching focused on its objectives. It also ensures that learners don’t face anything at the end of the course that they are not prepared for.

Perverse Incentives

Backward design is not the same thing as teaching to the test. When using backward design, teachers set goals to aid in lesson design; they may never actually give the final exam that they wrote. In many school systems, on the other hand, an external authority defines assessment criteria for all learners, regardless of their individual situations. The outcomes of those summative assessments directly affect the teachers’ pay and promotion, which means teachers have an incentive to focus on having learners pass tests rather than on helping them learn.

[Gree2014] argues that focusing on testing and measurement appeals to those with the power to set the tests, but is unlikely to improve outcomes unless it is coupled with support for teachers to make improvements based on test outcomes. The latter is often missing because large organizations usually value uniformity over productivity [Scot1998].

Backward design is described as a sequence, but it’s almost never done that way. We may, for example, change our mind about what we want to teach based on something that occurs to us while we’re writing an MCQ, or re-assess who we’re trying to help once we have a lesson outline. However, the notes we leave behind should present things in the order described above so that whoever has to use or maintain the lesson after us can retrace our thinking [Parn1986].

Learner Personas

The first step in the backward design process is figuring out who your audience is. One way to do this is to write two or three learner personas like those in Section 1.1. This technique is borrowed from user experience designers, who create short profiles of typical users to help them think about their audience.

A learner persona consists of:

  1. the person’s general background;

  2. what they already know;

  3. what they want to do; and

  4. any special needs they have.

The personas in Section 1.1 have the four points listed above, along with a short summary of how this book will help them. A learner persona for a volunteer group that runs weekend Python workshops might be:

  1. Jorge just moved from Costa Rica to Canada to study agricultural engineering. He has joined the college soccer team and is looking forward to learning how to play ice hockey.

  2. Other than using Excel, Word, and the internet, Jorge’s most significant previous experience with computers is helping his sister build a WordPress site for the family business back home.

  3. Jorge wants to measure properties of soil from nearby farms using a handheld device that sends data to his computer. Right now he has to open each data file in Excel, delete the first and last column, and calculate some statistics on what’s left. He has to collect at least 600 measurements in the next few months, and really doesn’t want to have to do these steps by hand for each one.

  4. Jorge can read English well, but sometimes struggles to keep up with spoken conversation that involves a lot of jargon.
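Jorge’s repetitive steps are exactly the kind of thing a short script can automate, which is the payoff such a workshop is selling. The sketch below is hypothetical: the file name, column layout, and choice of statistics are all invented for illustration:

```python
# Hypothetical sketch of the analysis Jorge wants to automate: read one
# measurement file, drop the first and last columns, and summarize the rest.
import csv
import statistics

def summarize(filename):
    with open(filename, newline="") as f:
        rows = [row[1:-1] for row in csv.reader(f)]  # drop first and last column
    values = [float(x) for row in rows[1:] for x in row]  # rows[1:] skips the header
    return {"mean": statistics.mean(values), "stdev": statistics.stdev(values)}
```

Once this works for one file, applying it to all 600 measurements is a three-line loop.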

Rather than writing new personas for every lesson or course, teachers usually create and share half a dozen that cover everyone they are likely to teach, then pick a few from that set to describe the audience for particular material. Personas that are used this way become a convenient shorthand for design issues: when speaking with each other, teachers can say, “Would Jorge understand why we’re doing this?” or, “What installation problems would Jorge face?”

Their Goals, Not Yours

Personas should always describe what the learner wants to do rather than what you think they actually need. Ask yourself what they are searching for online; it probably won’t include jargon that they don’t yet know, so part of what you have to do as an instructional designer is figure out how to make your lesson findable.

Learning Objectives

Formative and summative assessments help teachers figure out what they’re going to teach, but in order to communicate that to learners and other teachers, a course description should also have learning objectives. These help ensure that everyone has the same understanding of what a lesson is supposed to accomplish. For example, a statement like “understand Git” could mean any of the following:

  • Learners can describe three ways in which version control systems like Git are better than file-sharing tools like Dropbox and two ways in which they are worse.

  • Learners can commit a changed file to a Git repository using a desktop GUI tool.

  • Learners can explain what a detached HEAD is and recover from it using command-line operations.

Objectives vs. Outcomes

A learning objective is what a lesson strives to achieve. A learning outcome is what it actually achieves, i.e. what learners actually take away. The role of summative assessment is therefore to compare learning outcomes with learning objectives.

A learning objective describes how the learner will demonstrate what they have learned once they have successfully completed a lesson. More specifically, it has a measurable or verifiable verb that states what the learner will do and specifies the criteria for acceptable performance. Writing these may initially seem restrictive, but they will make you, your fellow teachers, and your learners happier in the long run: you will end up with clear guidelines for both your teaching and assessment, and your learners will appreciate having clear expectations.

One way to understand what makes for a good learning objective is to see how a poor one can be improved:

  • The learner will be given opportunities to learn good programming practices.
    This describes the lesson’s content, not the attributes of successful learners.

  • The learner will have a better appreciation for good programming practices.
    This doesn’t start with an active verb or define the level of learning, and the subject of learning has no context and is not specific.

  • The learner will understand how to program in R.
    While this starts with an active verb, it doesn’t define the level of learning and the subject of learning is still too vague for assessment.

  • The learner will write one-page data analysis scripts to read, filter, and summarize tabular data using R.
    This starts with an active verb, defines the level of learning, and provides context to ensure that outcomes can be assessed.

When it comes to choosing verbs, many teachers use Bloom’s Taxonomy. First published in 1956 and updated at the turn of the century [Ande2001], it is a widely used framework for discussing levels of understanding. Its most recent form has six categories; the list below gives a few of the verbs typically used in learning objectives written for each:

Remembering:

Exhibit memory of previously learned material by recalling facts, terms, basic concepts, and answers. (recognize, list, describe, name, find)

Understanding:

Demonstrate understanding of facts and ideas by organizing, comparing, translating, interpreting, giving descriptions, and stating main ideas. (interpret, summarize, paraphrase, classify, explain)

Applying:

Solve new problems by applying acquired knowledge, facts, techniques and rules in a different way. (build, identify, use, plan, select)

Analyzing:

Examine and break information into parts by identifying motives or causes; make inferences and find evidence to support generalizations. (compare, contrast, simplify)

Evaluating:

Present and defend opinions by making judgments about information, validity of ideas, or quality of work based on a set of criteria. (check, choose, critique, prove, rate)

Creating:

Compile information together in a different way by combining elements in a new pattern or proposing alternative solutions. (design, construct, improve, adapt, maximize, solve)

Bloom’s Taxonomy appears in almost every textbook on education, but [Masa2018] found that even experienced educators have trouble agreeing on how to classify specific things. The verbs are still useful, though, as is the notion of building understanding in steps: as Daniel Willingham has said, people can’t think without something to think about [Will2010], and this taxonomy can help teachers ensure that learners have those somethings when they need them.

Another way to think about learning objectives comes from [Fink2013], which defines learning in terms of the change it is meant to produce in the learner. Fink’s Taxonomy also has six categories, but unlike Bloom’s they are complementary rather than hierarchical:

Foundational Knowledge:

understanding and remembering information and ideas. (remember, understand, identify)

Application:

skills, critical thinking, managing projects. (use, solve, calculate, create)

Integration:

connecting ideas, learning experiences, and real life. (connect, relate, compare)

Human Dimension:

learning about oneself and others. (come to see themselves as, understand others in terms of, decide to become)

Caring:

developing new feelings, interests, and values. (get excited about, be ready to, value)

Learning How to Learn:

becoming a better learner. (identify source of information for, frame useful questions about)

A set of learning objectives based on this taxonomy for an introductory course on HTML and CSS might be:

  • Explain what CSS properties are and how CSS selectors work.

  • Style a web page using common tags and CSS properties.

  • Compare and contrast writing HTML and CSS to writing with desktop publishing tools.

  • Identify and correct issues in sample web pages that would make them difficult for the visually impaired to interact with.

  • Describe features of favorite web sites whose design particularly appeals to you and explain why.

  • Describe your two favorite online sources of information about CSS and explain what you like about them.

Maintainability

Once a lesson has been created someone needs to maintain it, and doing that is a lot easier if it has been built in a maintainable way. But what exactly does “maintainable” mean? The short answer is that a lesson is maintainable if it’s cheaper to update it than to replace it. Whether that is true depends on four factors:

How well documented the course’s design is.

If the person doing maintenance doesn’t know (or doesn’t remember) what the lesson is supposed to accomplish or why topics are introduced in a particular order, it will take them more time to update it. One reason to use backward design is to capture decisions about why each course is the way it is.

How easy it is for collaborators to collaborate technically.

Teachers usually share material by mailing PowerPoint files to each other or by putting them in a shared drive. Collaborative writing tools like Google Docs and wikis are a big improvement, as they allow many people to update the same document and comment on other people’s updates. The version control systems used by programmers, such as GitHub, are another approach. They let any number of people work independently and then merge their changes in a controlled, reviewable way. Unfortunately, version control systems have a steep learning curve and don’t handle common office document formats.

How willing people are to collaborate.

The tools needed to build a Wikipedia for lessons have been around for twenty years, but most teachers still don’t write and share lessons the way that they write and share encyclopedia entries.

How useful sharing actually is.

The Reusability Paradox states that the more reusable a learning object is, the less pedagogically effective it is [Wile2002]. The reason is that a good lesson resembles a novel more than it does a program: its parts are tightly coupled rather than independent black boxes. Direct re-use may therefore be the wrong goal for lessons; we might get further by trying to make them easier to remix.

If the Reusability Paradox is true, collaboration will be more likely if the things being collaborated on are small. This fits well with Mike Caulfield’s theory of choral explanations, which argues that sites like Stack Overflow succeed because they provide a chorus of answers for every question, each of which is most suitable for a slightly different questioner. If this is right, the lessons of tomorrow may be guided tours of community-curated Q&A repositories designed for learners at widely different levels.

Exercises

Create Learner Personas (small groups/30)

Working in small groups, create a 4-point persona that describes one of your typical learners.

Classify Learning Objectives (pairs/10)

Look at the example learning objectives for an introductory course on HTML and CSS in Section 6.2 and classify each according to Bloom’s Taxonomy. Compare your answers with those of your partner. Where did you agree and disagree?

Write Learning Objectives (pairs/20)

Write one or more learning objectives for something you currently teach or plan to teach using Bloom’s Taxonomy. Working with a partner, critique and improve the objectives. Does each one have a verifiable verb and clearly state criteria for acceptable performance?

Write More Learning Objectives (pairs/20)

Write one or more learning objectives for something you currently teach or plan to teach using Fink’s Taxonomy. Working with a partner, critique and improve the objectives.

Help Me Do It By Myself (small groups/15)

The educational theorist Lev Vygotsky introduced the notion of a Zone of Proximal Development (ZPD), which includes the problems that people cannot yet solve on their own but are able to solve with help from a mentor. These are the problems that are most fruitful to tackle next, as they are out of reach but attainable.

Working in small groups, choose one learner persona you have developed and describe two or three problems that are in that learner’s ZPD.

Building Lessons by Subtracting Complexity (individual/20)

One way to build a programming lesson is to write the program you want learners to finish with, then remove the most complex part that you want them to write and make it the last exercise. You can then remove the next most complex part you want them to write and make it the penultimate exercise, and so on. Anything that’s left after you have pulled out the exercises, such as loading libraries or reading data, becomes the starter code that you give them.

Take a program or web page that you want your learners to be able to create and work backward to break it into digestible parts. How many are there? What key idea is introduced by each one?
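A tiny, hypothetical worked example of this subtraction process (the program and the split into exercises are invented for illustration):

```python
# The finished program, annotated with which piece becomes which exercise.
import csv

# Starter code given to learners: loading the data.
def load(filename):
    with open(filename, newline="") as f:
        return [row for row in csv.reader(f)]

# Exercise 1 (removed second-to-last): compute per-column totals.
def totals(rows):
    sums = [0.0] * len(rows[0])
    for row in rows:
        for i, value in enumerate(row):
            sums[i] += float(value)
    return sums

# Exercise 2 (removed last, the most complex part): find the largest column.
def largest_column(rows):
    sums = totals(rows)
    return max(range(len(sums)), key=lambda i: sums[i])
```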

Inessential Weirdness (individual/15)

Betsy Leondar-Wright coined the phrase “inessential weirdness” to describe things groups do that aren’t really necessary, but which alienate people who aren’t yet members of that group. Sumana Harihareswara later used this notion as the basis for a talk on inessential weirdnesses in open source software, which includes things like using command-line tools with cryptic names. Take a few minutes to read these articles, then make a list of inessential weirdnesses you think your learners might encounter when you first teach them. How many of these can you avoid?

PETE (individual/15)

One pattern that works well for programming lessons is PETE: introduce the Problem, work through an Example, explain the Theory, and then Elaborate on a second example so that learners can see what is specific to each case and what applies to all cases. Pick something you have recently taught or been taught and outline a short lesson for it that follows these four steps.

PRIMM (individual/15)

Another lesson pattern is PRIMM [Sent2019]: Predict a program’s behavior or output, Run it to see what it actually does, Investigate why it does that by stepping through it in a debugger or drawing the flow of control, Modify it (or its inputs), and then Make something similar from scratch. Pick something you have recently taught or been taught and outline a short lesson for it that follows these five steps.
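Any small program with a predictable but non-obvious output will do for the Predict step. The function below is a hypothetical example of about the right size:

```python
# A program of the size PRIMM works well with: small enough to Predict
# and Run, interesting enough to Investigate, Modify, and Make from.
def mystery(values):
    result = []
    for v in values:
        if v % 2 == 0:       # what does this test select?
            result.append(v * v)
    return result

print(mystery([1, 2, 3, 4]))  # Predict before you run: what is printed?
```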

Concrete-Representational-Abstract (pairs/15)

Concrete-Representational-Abstract (CRA) is an approach to introducing new ideas that is used primarily with younger learners: physically manipulate a Concrete object, Represent the object with an image, then perform the same operations using numbers, symbols, or some other Abstraction.

  1. Write each of the numbers 2, 7, 5, 10, 6 on a sticky note.

  2. Simulate a loop that finds the largest value by looking at each in turn (concrete).

  3. Sketch a diagram of the process you used, labeling each step (representational).

  4. Write instructions that someone else could follow to go through the same steps (abstract).

Compare your representational and abstract materials with your partner’s.
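For learners who go on to programming, the abstract step eventually becomes code. A hypothetical Python version of those instructions:

```python
# The "abstract" step of the sticky-note exercise, written as code:
# look at each value in turn, keeping the largest seen so far.
def largest(values):
    biggest = values[0]
    for value in values[1:]:
        if value > biggest:
            biggest = value
    return biggest

print(largest([2, 7, 5, 10, 6]))  # the sticky-note numbers
```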

Evaluating a Lesson Repository (small groups/10)

[Leak2017] explores why computer science teachers don’t use lesson sharing sites and recommends ways to make them more appealing:

  1. The landing page should allow site visitors to identify their background and their interests in visiting the site. Sites should ask two questions: “What is your current role?” and “What course and grade level are you interested in?”

  2. Sites should display all learning resources in the context of the full course so that potential users can understand their intended context of use.

  3. Many teachers have concerns about having their (lack of) knowledge judged by their peers if they post to sites’ discussion forums. These forums should therefore allow anonymous posting.

In small groups, discuss whether these three features would be enough to convince you to use a lesson sharing site, and if not, what would.

Review

Concepts: Learner personas

Pedagogical Content Knowledge

Every teacher needs three things:

content knowledge

such as how to program;

general pedagogical knowledge

such as an understanding of the psychology of learning; and

pedagogical content knowledge

(PCK), which is the domain-specific knowledge of how to teach a particular concept to a particular audience. In computing, PCK includes things like what examples to use when teaching how parameters are passed to a function or what misconceptions about nesting HTML tags are most common.

We can add technical knowledge to this mix [Koeh2013], but that doesn’t change the key point: it isn’t enough to know the subject and how to teach—you have to know how to teach that particular subject [Maye2004]. This chapter therefore summarizes some results from computing education research that will add to your store of PCK.

As with all research, some caution is required when interpreting results:

Theories change as more data becomes available.

Computing education research (CER) is a young discipline: the American Society for Engineering Education was founded in 1893 and the National Council of Teachers of Mathematics in 1920, but the Computer Science Teachers Association didn’t exist until 2005. While a steady stream of new insights is reported at conferences like SIGCSE, ITiCSE, and ICER, we simply don’t know as much about learning to program as we do about learning to read, play a sport, or do basic arithmetic.

Most of these studies’ subjects are WEIRD:

they are from Western, Educated, Industrialized, Rich, and Democratic societies [Henr2010]. What’s more, they are also mostly young and in institutional classrooms, since that’s the population most researchers have easiest access to. We know much less about adults, members of marginalized groups, learners in free-range settings, or end-user programmers, even though there are far more of them.

If this were an academic treatise, I would therefore preface most claims with qualifiers like, “Some research may seem to indicate that…” But since actual teachers in actual classrooms have to make decisions regardless of whether research has clear answers yet or not, this chapter presents actionable best guesses rather than nuanced perhapses.

Jargon

Like any specialty, CER has jargon. CS1 refers to an introductory semester-long course in which learners meet variables, loops, and functions for the first time, while CS2 refers to a second course that covers basic data structures like stacks and queues, and CS0 refers to an introductory course for people without any prior experience who aren’t intending to continue with computing right away. Full definitions for these terms can be found in the ACM Curriculum Guidelines.

What Are We Teaching Them Now?

Very little is known about what coding bootcamps and other free-range initiatives teach, in part because many are reluctant to share their curriculum. We know more about what is taught by institutions [Luxt2017]:

  • Programming Process: 87%
  • Abstract Programming Thinking: 63%
  • Data Structures: 40%
  • Object-Oriented Concepts: 36%
  • Control Structures: 33%
  • Operations & Functions: 26%
  • Data Types: 23%
  • Input/Output: 17%
  • Libraries: 15%
  • Variables & Assignment: 14%
  • Recursion: 10%
  • Pointers & Memory Management: 5%

High-level topic labels like these can hide a multitude of sins. A more tangible result comes from [Rich2017], which reviewed a hundred articles to find learning trajectories for computing classes in elementary and middle schools. Their results for sequencing, repetition, and conditionals are essentially collective concept maps that combine and rationalize the implicit and explicit thinking of many different educators (Figure [f:pck-trajectory]).

Learning trajectory for conditions (from [Rich2017])

How Much Are They Learning?

There can be a world of difference between what teachers teach and how much learners learn. To explore the latter, we must use other measures or do direct studies. Taking the former approach, roughly two-thirds of post-secondary students pass their first computing course, with some variations depending on class size and so on, but with no significant differences over time or based on language [Benn2007a,Wats2014].

How does prior experience affect these results? To find out, [Wilc2018] compared the performance and confidence of novices with and without prior programming experience in CS1 and CS2 (see below). They found that novices with prior experience outscored novices without by 10% in CS1, but those differences disappeared by the end of CS2. They also found that women with prior exposure outperformed their male peers in all areas, but were consistently less confident in their abilities (Section 10.4).

As for direct studies of how much novices learn, [McCr2001] presented a multi-site international study that was later replicated by [Utti2013]. According to the first study, “the disappointing results suggest that many students do not know how to program at the conclusion of their introductory courses.” More specifically, “For a combined sample of 216 students from four universities, the average score was 22.89 out of 110 points on the general evaluation criteria developed for this study.” This result may say as much about teachers’ expectations as it does about student ability, but either way, our first recommendation is to measure and track results in ways that can be compared over time so that you can tell if your lessons are becoming more or less effective.

What Misconceptions Do Novices Have?

Chapter 2 explained why clearing up novices’ misconceptions is just as important as teaching them strategies for solving problems. The biggest misconception novices have—sometimes called the “superbug” in coding—is the belief that computers understand what people mean in the way that another human being would [Pea1986]. Our second recommendation is therefore to teach novices that computers don’t understand programs, i.e. that calling a variable “cost” doesn’t guarantee that its value is actually a cost.

[Sorv2018] presents over forty other misconceptions that teachers can also try to clear up, many of which are also discussed in [Qian2017]’s survey. One is the belief that variables in programs work the same way they do in spreadsheets, i.e. that after executing:

grade = 65
total = grade + 10
grade = 80
print(total)

the value of total will be 90 rather than 75 [Kohn2017]. This is an example of the way in which novices construct a plausible-but-wrong mental model by making analogies; other misconceptions include:

  • A variable holds the history of the values it has been assigned, i.e. it remembers what its value used to be.

  • Two objects with the same value for a name or id attribute are guaranteed to be the same object.

  • Functions are executed as they are defined, or are executed in the order in which they are defined.

  • A while loop’s condition is constantly evaluated, and the loop stops as soon as it becomes false. Similarly, the conditions in if statements are also constantly evaluated, and their statements are executed as soon as the condition becomes true, regardless of where the flow of control is at the time.

  • Assignment moves values, i.e. after a = b, the variable b is empty.
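Several of these misconceptions can be probed directly at a Python prompt; a minimal sketch:

```python
# A variable holds only its current value, not a history of past values.
grade = 65
grade = 80
print(grade)            # 80: the earlier value 65 is gone

# Assignment copies a value; it does not empty the source variable.
b = 5
a = b
print(a, b)             # 5 5: 'b' still holds its value after 'a = b'

# An 'if' condition is checked only when control reaches it.
x = 0
if x > 0:
    print("positive")   # does not run: x was 0 when the test was made
x = 10                  # changing x afterwards does not re-trigger the 'if'
```

Running fragments like these and predicting the output beforehand is a quick way to surface which of the misconceptions above a learner actually holds.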

What Mistakes Do Novices Make?

The mistakes novices make can tell us what to prioritize in our teaching, but it turns out that most teachers don’t know how common different kinds of mistakes actually are. The largest study of this is [Brow2017], which found that mismatched quotes and parentheses are the most common type of error in novice Java programs, but also the easiest to fix, while some mistakes (like putting the condition of an if in {} instead of ()) are most often made only once. Unsurprisingly, mistakes that produce compiler errors are fixed much faster than ones that don’t. Some mistakes, however, are made many times, like invoking methods with the wrong arguments (e.g. passing a string instead of an integer).

Not Right vs. Not Done

One difficulty in research like this is distinguishing mistakes from work in progress. For example, an empty if statement or a method that is defined but not yet used may be a sign of incomplete code rather than an error.

[Brow2017] also compared the mistakes novices actually make with what their teachers thought they made. They found that, “educators formed only a weak consensus about which mistakes are most frequent, that their rankings bore only a moderate correspondence to the students in the data, and that educators’ experience had no effect on this level of agreement.” For example, confusing = (assignment) with == (equality) wasn’t nearly as common as most teachers believed.

Not Just for Code

[Park2015] collected data from an online HTML editor during an introductory web development course and found that nearly all learners made syntax errors that remained unresolved weeks into the course. 20% of these errors related to the relatively complex rules that dictate when it is valid for HTML elements to be nested in one another, but 35% related to the simpler syntax of the tags themselves. The tendency of many teachers to say, “But the rules are simple,” is a good example of the expert blind spot discussed in Chapter 3.

How Do Novices Program?

[Solo1984,Solo1986] pioneered the exploration of novice and expert programming strategies. The key finding is that experts know both “what” and “how,” i.e. they understand what to put into programs and they have a set of program patterns or plans to guide their construction. Novices lack both, but most teachers focus solely on the former, even though bugs are often caused by not having a strategy for solving the problem rather than by a lack of knowledge about the language. Recent work has shown the effectiveness of teaching four distinct skills in a specific order [Xie2019]:

              semantics of code                      templates related to goals
reading       1. read code and predict behavior      3. recognize templates and their uses
writing       2. write correct syntax                4. use templates to meet goals

Our next recommendations are therefore to have learners read code, then modify it, then write it, and to introduce common patterns explicitly and have learners practice using them. [Mull2007b] is just one of many studies demonstrating the benefits of teaching common patterns explicitly, and decomposing problems into patterns creates natural opportunities for creating and labeling subgoals [Marg2012,Marg2016].
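For example, the accumulator (or “gatherer”) pattern recurs constantly in introductory code; a minimal Python sketch (the function name is illustrative):

```python
# The accumulator pattern: initialize, update inside a loop, use afterwards.
def total_cost(prices):
    total = 0              # 1. initialize the accumulator
    for price in prices:   # 2. step through the values
        total += price     # 3. update the accumulator
    return total           # 4. use the result after the loop

print(total_cost([2, 3, 5]))  # 10
```

Naming the steps explicitly gives learners subgoal labels they can reuse when they meet the same pattern in new problems.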

How Do Novices Debug?

A decade ago, [McCa2008] wrote, “It is surprising how little page space is devoted to bugs and debugging in most introductory programming textbooks.” Little has changed since: there are hundreds of books on compilers and operating systems, but only a handful about debugging, and I have never seen an undergraduate course devoted to the subject.

[List2004,List2009] found that many novices struggled to predict the output of short pieces of code and to select the correct completion of the code from a set of possibilities when told what it was supposed to do. More recently, [Harr2018] found that the gap between being able to trace code and being able to write it has largely closed by CS2, but that novices who still have a gap (in either direction) are likely to do poorly.

Our fifth recommendation is therefore to explicitly teach novices how to debug. [Fitz2008,Murp2008] found that good debuggers were good programmers, but not all good programmers were good at debugging. Those who were good at it used a symbolic debugger to step through their programs, traced execution by hand, wrote tests, and re-read the spec frequently, all of which are teachable habits. However, tracing execution step by step was sometimes used ineffectively: for example, a novice might put the same print statement in both branches of an if-else. Novices would also comment out lines that were actually correct as they tried to isolate a problem; teachers can make both of these mistakes deliberately, point them out, and correct them to help novices get past them.
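The ineffective-tracing habit described above is easy to demonstrate and correct side by side; a sketch in Python (the function names are illustrative):

```python
def classify(n):
    if n >= 0:
        print("here")      # ineffective: the same message in both branches
        result = "non-negative"
    else:
        print("here")      # means the output can't tell us which path ran
        result = "negative"
    return result

def classify_traced(n):
    if n >= 0:
        print("then-branch, n =", n)   # better: distinct, informative traces
        result = "non-negative"
    else:
        print("else-branch, n =", n)
        result = "negative"
    return result
```

Making this mistake deliberately in front of the class, then fixing it, shows learners why each trace message must identify where it came from.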

Teaching novices how to debug can also help make classes easier to manage. [Alqa2017] found that learners with more experience solved debugging problems significantly faster, but times varied widely: 4–10 minutes was a typical range for individual exercises, which means that some learners need 2–3 times longer than others to get through the same exercises. Teaching the slower learners what the faster ones are doing will help make the group’s overall progress more uniform.

Debugging depends on being able to read code, which multiple studies have shown is the single most effective way to find bugs [Basi1987,Keme2009,Bacc2013]. The code quality rubric developed in [Steg2014,Steg2016a] is a good checklist of things to look for, though it is best presented in chunks rather than all at once.

Having learners read code and summarize its behavior is a good exercise (Section 5.1), but often takes too long to be practical in class. Having them predict a program’s output just before it is run, on the other hand, helps reinforce learning (Section 9.11) and also gives them a natural moment to ask “what if” questions. Teachers or learners can also trace changes to variables as they go along, which [Cunn2017] found was effective (Section 12.2).
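The variable-tracing strategy can be made concrete by recording each new value as it appears; a minimal sketch, with the trace a learner would write by hand shown in a comment:

```python
# Hand-tracing: each time a variable changes, write its new value beside its name.
# Recording the successive values of 'total' mechanizes the same strategy:
values = [3, 1, 4]
total = 0
trace = []
for n in values:
    total += n
    trace.append(total)   # what a learner would write down at this step

print(trace)   # [3, 4, 8]
```

Comparing a hand-written trace against output like this gives learners immediate feedback on whether their mental model of the loop is correct.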

What About Testing?

Novice programmers seem just as reluctant to test software as professionals. There’s no doubt that doing it is valuable—[Cart2017] found that high-performing novices spent a lot of time testing, while low performers spent much more time working on code with errors—and many teachers require learners to write tests for assignments. But how well do they do this? One answer comes from [Bria2015], which scored learners’ programs by how many teacher-provided test cases those programs passed, and conversely scored test cases written by learners according to how many deliberately-seeded bugs they caught. They found that novices’ tests often have low coverage (i.e. they don’t test most of the code) and that they often test many things at once, which makes it hard to pinpoint the causes of errors.

Another answer comes from [Edwa2014b], which looked at all of the bugs in all novices’ code submissions combined and identified those detected by the novices’ test suite. They found that novices’ tests only detected an average of 13.6% of the faults present in the entire program population. What’s more, 90% of the novices’ tests were very similar, which indicates that novices mostly write tests to confirm that code is doing what it’s supposed to rather than to find cases where it isn’t.

One approach to teaching better testing practices is to define a programming problem by providing a set of tests to be passed rather than through a written description (Section 12.1). Before doing this, though, take a moment to look at how many tests you’ve written for your own code recently, and then decide whether you’re teaching what you believe people should do, or what they (and you) actually do.
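As a sketch of what defining a problem through tests might look like in Python (the function name and test cases are invented for illustration, not taken from the studies cited above):

```python
# A problem defined by tests rather than by a written description:
# learners must write count_vowels so that all of these assertions pass.
def count_vowels(text):
    # one possible solution; learners would start from an empty stub
    return sum(1 for ch in text.lower() if ch in "aeiou")

# The teacher-provided "specification":
assert count_vowels("") == 0
assert count_vowels("rhythm") == 0
assert count_vowels("Audio") == 4
print("all tests pass")
```

Handing out the assertions and an empty stub makes the tests the specification, and models the habit of testing one behavior per case.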

Do Languages Matter?

The short answer is “yes”: novices learn to program faster and learn more using blocks-based tools like Scratch (Figure [f:pck-scratch]) [Wein2017]. One reason is that blocks-based systems reduce cognitive load by eliminating the possibility of syntax errors. Another is that block interfaces encourage exploration in a way that text does not: like all good tools, Scratch can be learned accidentally [Malo2010].

But what happens after blocks? [Chen2018] found that learners whose first programming language was graphical had higher grades in introductory programming courses than learners whose first language was textual when the languages were introduced in or before early adolescence. Our sixth recommendation is therefore to start children and teens with blocks-based interfaces before moving to text-based systems. The age qualification is there because Scratch deliberately looks like it’s meant for younger users, and it can still be hard to convince adults to take it seriously.

Scratch

Scratch has probably been studied more than any other programming tool. One example is [Aiva2016], which analyzed over 250,000 Scratch projects and found (among other things) that about 28% of projects have some blocks that are never called or triggered. As in the earlier aside about incomplete versus incorrect Java programs, the authors hypothesize that users may be using these blocks as a scratchpad to keep track of bits of code they don’t (yet) want to throw away. Another example is [Grov2017,Mlad2017], which studied novices learning about loops in Scratch, Logo, and Python. They found that misconceptions about loops are minimized when using a block-based language rather than a text-based language. What’s more, as tasks become more complex (such as using nested loops) the differences become larger.

Harder Than Necessary

The creators of programming languages make those languages harder to learn by not doing basic usability testing. For example, [Stef2013] found that, “the three most common words for looping in computer science, for, while, and foreach, were rated as the three most unintuitive choices by non-programmers.” Their work shows that C-style syntax (as used in Java and Perl) is just as hard for novices to learn as a randomly designed syntax, but that the syntax of languages such as Python and Ruby is significantly easier to learn, and the syntax of a language whose features are tested before being added is easier still. [Stef2017] is a useful brief summary of what we actually know about designing programming languages and why we believe it’s true, while [Guzd2016] lays out five principles that programming languages for learners should follow.

Object-Oriented Programming

Objects and classes are power tools for experienced programmers, and many educators advocate an objects-first approach to teaching programming (though they sometimes disagree on exactly what that means [Benn2007b]). [Sorv2014] describes and motivates this approach, and [Koll2015] describes three generations of tools designed to support novice programming in object-oriented environments.

Introducing objects early has a few challenges. [Mill2016b] found that most novices using Python struggled to understand self (which refers to the current object): they omitted it in method definitions, failed to use it when referencing object attributes, or both. [Rago2017] found something similar in high school students, and also found that high school teachers often weren’t clear on the concept either. On balance, we recommend that teachers start with functions rather than objects, i.e. that learners not be taught how to define classes until they understand basic control structures and data types.

Type Declarations

Programmers have argued for decades about whether variables’ data types should have to be declared or not, usually based on their personal experience as professionals rather than on any kind of data. [Endr2014,Fisc2015] found that requiring novices to declare variable types does add some complexity to programs, but it pays off fairly quickly by acting as documentation for a method’s use—in particular, by forestalling questions about what’s available and how to use it.

Variable Naming

[Kern1999] wrote, “Programmers are often encouraged to use long variable names regardless of context. This is a mistake: clarity is often achieved through brevity.” Lots of programmers believe this, but [Hofm2017] found that using full words in variable names led to an average of 19% faster comprehension compared to letters and abbreviations. In contrast, [Beni2017] found that using single-letter variable names didn’t affect novices’ ability to modify code. This may be because their programs are shorter than professionals’ or because some single-letter variable names have implicit types and meanings. For example, most programmers assume that i, j, and n are integers and that s is a string, while x, y, and z are either floating-point numbers or integers more or less equally.
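The contrast is easy to see side by side; a small sketch (the names are invented for illustration):

```python
def avg_t(ts):
    # abbreviated names: the reader must guess what 'ts' holds
    return sum(ts) / len(ts)

def average_temperature(temperatures):
    # full-word names: the same computation, but self-documenting
    return sum(temperatures) / len(temperatures)
```

The two functions behave identically; only how quickly a reader can work out what they do differs.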

How important is this? [Bink2012] reported that reading and understanding code is fundamentally different from reading prose: “the more formal structure and syntax of source code allows programmers to assimilate and comprehend parts of the code quite rapidly independent of style. In particular, beacons and program plans play a large role in comprehension.” It also found that experienced developers are relatively unaffected by identifier style, so our recommendation is just to use a consistent style in all examples. Since most languages have style guides (e.g. PEP 8 for Python) and tools to check that code follows them, our full recommendation is to use tools to ensure that all code examples adhere to a consistent style.

Do Better Error Messages Help?

Incomprehensible error messages are a major source of frustration for novices (and for experienced programmers as well). Several researchers have therefore explored whether better error messages would help alleviate this. For example, [Beck2016] rewrote some of the Java compiler’s messages so that instead of:

C:\stj\Hello.java:2: error: cannot find symbol
        public static void main(string[ ] args)
                                ^
1 error
Process terminated ... there were problems.

learners would see:

Looks like a problem on line number 2.
If "string" refers to a datatype, capitalize the 's'!

Sure enough, novices given these messages made fewer repeated errors and fewer errors overall.

[Bari2017] went further and used eye tracking to show that despite the grumblings of compiler writers, people really do read error messages—in fact, they spend 13–25% of their time doing this. However, reading error messages turns out to be as difficult as reading source code, and how difficult it is to read the error messages strongly predicts task performance. Teachers should therefore show learners how to read and interpret error messages.[Marc2011] has a rubric for responses to error messages that can be useful in grading such exercises.

Does Visualization Help?

Visualizing program execution is a perennially popular idea, and tools like the Online Python Tutor [Guo2013] and Loupe (which shows how JavaScript’s event loop works) are useful teaching aids. However, people learn more from constructing visualizations than they do from viewing visualizations constructed by others [Stas1998,Ceti2016], so does visualization actually help learning?

To answer this, [Cunn2017] replicated an earlier study of the kinds of sketching learners do when tracing code execution. They found that not sketching at all correlates with lower success, while tracing changes to variables’ values by writing new values near their names as they change was the most effective strategy.

One possible confounding effect they checked was time: since sketchers take significantly more time to solve problems, do they do better just because they think for longer? The answer is no: there was no correlation between the time taken and the score achieved. Our recommendation is therefore to teach learners to trace variables’ values when debugging.

Flowcharts

One often-overlooked finding about visualization is that people understand flowcharts better than pseudocode if both are equally well structured [Scan1989]. Earlier work showing that pseudocode outperformed flowcharts used structured pseudocode and tangled flowcharts; when the playing field was leveled, novices did better with the graphical representation.

What Else Can We Do to Help?

[Viha2014] examined the average improvement in pass rates of various kinds of intervention in programming classes. They point out that there are many reasons to take their findings with a grain of salt: the pre-change teaching practices are rarely stated clearly, the quality of change is not judged, and only 8.3% of studies reported negative findings, so either there is positive reporting bias or the way we’re teaching right now is the worst possible and anything would be an improvement. And like many other studies discussed in this chapter, they were only looking at university classes, so their findings may not generalize to other groups.

With those caveats in mind, they found ten things teachers can do to improve outcomes (Figure [f:pck-interventions]):

Collaboration:

Activities that encourage learner collaboration either in classrooms or labs.

Content Change:

Parts of the teaching material were changed or updated.

Contextualization:

Course content and activities were aligned towards a specific context such as games or media.

CS0:

Creation of a preliminary course to be taken before the introductory programming course; could be organized only for some (e.g. at-risk) learners.

Game Theme:

A game-themed component was introduced to the course.

Grading Scheme:

A change in the grading scheme, such as increasing the weight of programming activities while reducing that of the exam.

Group Work:

Activities with increased group work commitment such as team-based learning and cooperative learning.

Media Computation:

Activities explicitly declaring the use of media computation (Chapter 10).

Peer Support:

Support by peers in form of pairs, groups, hired peer mentors or tutors.

Other Support:

An umbrella term for all support activities, e.g. increased teacher hours, additional support channels, etc.

Effectiveness of interventions

This list highlights the importance of cooperative learning. [Beck2013] looked at this specifically over three academic years in courses taught by two different teachers and found significant benefits overall and for many subgroups. The cooperators had higher grades and left fewer questions blank on the final exam, which indicates greater self-efficacy and willingness to try to debug things.

Computing Without Coding

Writing code isn’t the only way to teach people how to program: having novices work on computational creativity exercises improves grades at several levels [Shel2017]. A typical exercise is to describe an everyday object (such as a paper clip or toothbrush) in terms of its inputs, outputs, and functions. This kind of teaching is sometimes called unplugged; the CS Unplugged site has lessons and exercises for doing this.

Where Next?

For those who want to go deeper, [Finc2019] is a comprehensive summary of CER, while [Ihan2016] summarizes the methods that studies use most often. I hope that some day we will have catalogs like [Ojos2015] and more teacher-training materials like [Hazz2014,Guzd2015a,Sent2018] to help us all do better.

Most of the research reported in this chapter was publicly funded but is locked away behind paywalls: at a guess, I broke the law 250 times to download papers from sites like Sci-Hub while writing this book. I hope the day is coming when no one will need to do that; if you are a researcher, please hasten that day by publishing your work in open access venues.

Exercises

Your Learners’ Misunderstandings (small groups/15)

Working in small groups, re-read Section 7.3 and make a list of misconceptions you think your learners have. How specific are they? How would you check how accurate your list is?

Checking for Common Errors (individual/20)

These common errors are taken from a longer list in [Sirk2012]:

Inverted assignment:

The learner assigns the value of the left-hand variable to the right-hand variable rather than the other way around.

Wrong branch:

The learner thinks the code in the body of an if is run even if the condition is false.

Executing function instead of defining it:

The learner believes that a function is executed as it is defined.

Write one exercise for each to check that learners aren’t making that mistake.

Mangled Code (pairs/15)

[Chen2017] describes exercises in which learners reconstruct code that has been mangled by removing comments, deleting or replacing lines of code, moving lines, and so on. Performance on these correlates strongly with performance on assessments in which learners write code, but these questions require less work to mark. Take the solution to a programming exercise you’ve created in the past, mangle it in two different ways, swap with a partner, and see how long it takes each of you to answer the other’s question correctly.

The Rainfall Problem (pairs/10)

[Solo1986] introduced the Rainfall Problem, which has been used in many subsequent studies of programming [Fisl2014,Simo2013,Sepp2015]. Write a program that repeatedly reads in positive integers until it reads the integer 99999. After seeing 99999, the program prints the average of the numbers seen.

  1. Solve the Rainfall Problem in the programming language of your choice.

  2. Compare your solution with that of your partner. What does yours do that theirs doesn’t and vice versa?

Roles of Variables (pairs/15)

[Kuit2004,Byck2005,Saja2006] presented a set of single-variable patterns that I have found very useful in teaching beginners:

Fixed value:

A data item that does not get a new proper value after its initialization.

Stepper:

A data item stepping through a systematic, predictable succession of values.

Walker:

A data item traversing in a data structure.

Most-recent holder:

A data item holding the latest value encountered while going through a succession of values.

Most-wanted holder:

A data item holding the best or most appropriate value encountered so far.

Gatherer:

A data item accumulating the effect of individual values.

Follower:

A data item that always gets its new value from the old value of some other data item.

One-way flag:

A two-valued data item that cannot get its initial value once the value has been changed.

Temporary:

A data item holding some value for a very short time only.

Organizer:

A data structure storing elements that can be rearranged.

Container:

A data structure storing elements that can be added and removed.

Choose a 5–15 line program and classify its variables using these categories. Compare your classifications with those of a partner. Where you disagreed, did you understand each other’s view?
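As a reference point before trying your own, here is a short sketch with one possible classification of each variable (the function and the labels are illustrative, not a definitive answer):

```python
def largest_positive(values):
    LIMIT = 0              # fixed value: never reassigned after initialization
    best = None            # most-wanted holder: best value encountered so far
    found = False          # one-way flag: flips to True at most once
    for v in values:       # most-recent holder: the latest value in the succession
        if v > LIMIT:
            found = True
            if best is None or v > best:
                best = v
    return best if found else None

print(largest_positive([-2, 7, 3]))  # 7
```

Even in a dozen lines, reasonable people can disagree (is v also a walker over the list?), which is exactly the discussion this exercise is meant to provoke.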

What Are You Teaching? (individual/10)

Compare the topics you teach to the list developed in [Luxt2017] (Section 7.1). Which topics do you cover? Which don’t you cover? What extra topics do you cover that aren’t in their list?

Beneficial Activities (individual/10)

Look at the list of interventions developed by [Viha2014] (Section 7.10). Which of these things do you already do in your classes? Which ones could you easily add? Which ones are irrelevant?

Misconceptions and Challenges (small groups/15)

The Professional Development for CS Principles Teaching site includes a detailed list of learners’ misconceptions and exercises. Working in small groups, choose one section (such as data structures or functions) and go through their list. Which of these misconceptions do you remember having when you were a learner? Which do you still have? Which have you seen in your learners?

What Do You Care Most About? (whole class/15)

[Denn2019] asked people to propose and rate various CER questions, and found that there was no overlap between those that researchers cared most about and those that non-researchers cared most about. The researchers’ favorites were:

  1. What fundamental programming concepts are the most challenging for students?

  2. What teaching strategies are most effective when dealing with a wide range of prior experience in introductory programming classes?

  3. What affects students’ ability to generalize from simple programming examples?

  4. What teaching practices are most effective for teaching computing to children?

  5. What kinds of problems do students in programming classes find most engaging?

  6. What are the most effective ways to teach programming to various groups?

  7. What are the most effective ways to scale computing education to reach the general student population?

while the most important questions for non-researchers were:

  1. How and when is it best to give students feedback on their code to improve learning?

  2. What kinds of programming exercises are most effective when teaching students Computer Science?

  3. What are the relative merits of project-based learning, lecturing, and active learning for students learning computing?

  4. What is the most effective way to provide feedback to students in programming classes?

  5. What do people find most difficult when breaking problems down into smaller tasks while programming?

  6. What are the key concepts that students need to understand in introductory computing classes?

  7. What are the most effective ways to develop computing competency among students in non-computing disciplines?

  8. What is the best order in which to teach basic computing concepts and skills?

Have each person in the class independently give one point to each of the eight questions from the combined lists that they care most about, then calculate an average score for each question. Which ones are most popular in your class? In which group are the most popular questions?

Teaching as a Performance Art

In Darwin Among the Machines, George Dyson wrote, “In the game of life and evolution there are three players at the table: human beings, nature, and machines. I am firmly on the side of nature. But nature, I suspect, is on the side of the machines.” There are similarly now three players in the game of education: textbooks and other read-only materials, live lectures, and automated online lessons. You may give your learners both written lessons and some combination of recorded video and self-paced exercises, but if you are going to teach in person you must offer something different from (and hopefully better than) either of them. This chapter therefore focuses on how to teach programming by actually doing it.

Live Coding

Teaching is theater, not cinema.
— Neal Davis

The most effective way to teach programming is live coding [Rubi2013,Haar2017,Raj2018]. Instead of presenting pre-written material, the teacher writes code in front of the class while the learners follow along, typing it in and running it as they go. Live coding works better than slides for several reasons:

  • It enables active teaching by allowing teachers to follow their learners’ interests and questions in the moment. A slide deck is like a railway track: it may be a smooth ride, but you have to decide in advance where you’re going. Live coding is more like driving an off-road vehicle: it may be bumpier, but it’s a lot easier to change direction and go where people want.

  • Watching a program being written is more motivating than watching someone page through slides.

  • It facilitates unintended knowledge transfer: people learn more than we are consciously teaching by watching how we do things.

  • It slows the teacher down: if they have to type in the program as they go along then they can only go twice as fast as their learners rather than ten times faster as they could with slides.

  • It helps reduce the load on short-term memory because it makes the teacher more aware of how much they are throwing at their learners.

  • Learners get to see how to diagnose and correct mistakes. They are going to spend a lot of time doing this; unless you’re a perfect typist, live coding ensures that they get to see you recover from errors in real time.

  • Watching teachers make mistakes shows learners that it’s all right to make mistakes of their own. If the teacher isn’t embarrassed about making and talking about mistakes, learners will be more comfortable doing so too.

Another benefit of live coding is that it demonstrates the order in which programs should be written. When looking at how people solved Parsons Problems, [Ihan2011] found that experienced programmers often dragged the method signature to the beginning, then added the majority of the control flow (i.e. loops and conditionals), and only then added details like variable initialization and handling of corner cases. This out-of-order authoring is foreign to novices, who read and write code in the order it’s presented on the page; seeing it helps them learn to decompose problems into subgoals that can be tackled one by one. Live coding also gives teachers a chance to emphasize the importance of small steps with frequent feedback [Blik2014] and the importance of picking a plan rather than making more-or-less random changes and hoping things will get better [Spoh1985].

It takes a bit of practice to get comfortable talking while you code in front of an audience, but most people report that it quickly becomes no more difficult than talking around a deck of slides. The sections below offer tips on how to make your live coding better.

Embrace Your Mistakes

The typos are the pedagogy.
— Emily Jane McTavish

The most important rule of live coding is to embrace your mistakes. No matter how well you prepare, you will make some; when you do, think through them with your audience. While data is hard to come by, professional programmers spend anywhere from 25% to 60% of their time debugging; novices spend much more (Section 7.6), but most textbooks and tutorials spend little time diagnosing and correcting problems. If you talk aloud while you figure out what you mistyped or where you took the wrong path, and explain how you’ve corrected yourself, you will give your learners a toolbox they can use when they make their own mistakes.

Deliberate Fumbles

Once you have given a lesson several times, you’re unlikely to make anything other than basic typing mistakes (which can still be informative). You can try to remember past mistakes and make them deliberately, but that often feels forced. An alternative approach is twitch coding: ask learners one by one to tell you what to type next. This is pretty much guaranteed to get you into some kind of trouble.

Ask For Predictions

One way to keep learners engaged while you are live coding is to ask them to make predictions about what the code on the screen is going to do. You can then write down the first few suggestions they make, have the whole class vote on which they think is most likely, and then run the code. This is a lightweight form of peer instruction, which we will discuss in Section 9.2; as well as keeping their attention on task, it gives them practice at reasoning about code’s behavior.
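Any snippet with a commonly missed behavior makes a good prediction question. As one illustrative sketch (this particular example uses Python’s mutable default arguments; it is not drawn from the text above):

```python
# Ask the class: what does the second call print?
def append_item(item, items=[]):
    """Append item to items and return the list."""
    items.append(item)
    return items

print(append_item(1))
print(append_item(2))
# Many learners predict [2]; it actually prints [1, 2], because the
# default list is created once and shared between calls.
```

Writing down the class’s first few predictions before running the code turns the surprise into a discussion rather than a gotcha.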

Take It Slow

Every time you type a command, add a line of code to a program, or select an item from a menu, say what you are doing out loud and then point to what you have done and its output on the screen and go through it a second time. This allows learners to catch up and to check that what they have just done is correct. It is particularly important when some of your learners have seeing or hearing challenges or are not fluent in the language of instruction.

Whatever you do, don’t copy and paste code: doing this practically guarantees that you’ll race ahead of your learners. And if you use tab completion, say it out loud so that your learners understand what you’re doing: “Let’s use turtle dot ‘r’ ‘i’ and tab to get ‘right’.”

If the output of your command or code makes what you just typed disappear from view, scroll back up so learners can see it again. If that’s not practical, execute the same command a second time or copy and paste the last command(s) into the workshop’s shared notes.

Be Seen and Heard

When you sit down, you are more likely to look at your screen rather than at your audience and may be hidden from learners in the back rows of your classroom. If you are physically able to stand up for a couple of hours, you should therefore do so while teaching. Plan for this and make sure that you have a raised table, standing desk, or lectern for your laptop so that you don’t have to bend over to type.

Regardless of whether you are standing or sitting, make sure to move around as much as you can: go to the screen to point something out, draw something on the whiteboard, or just step away from your computer for a few moments and speak directly to your audience. Doing this draws your learners’ attention away from their screens and provides a natural time for them to ask questions.

If you are going to be teaching for more than a couple of hours, it’s worth using a microphone even in a small room. Your throat gets tired just like every other part of your body; using a mike is no different from wearing comfortable shoes (which you also ought to do). It can also make a big difference to people with hearing disabilities.

Mirror Your Learner’s Environment

You may have customized your environment with a fancy Unix shell prompt, a custom color scheme for your development environment, or a plethora of keyboard shortcuts. Your learners won’t have any of this, so try to create an environment that mirrors what they do have. Some teachers create a separate bare-bones account on their laptop or a separate teaching-only account if they’re using an online service like Scratch or GitHub. Doing this can also help prevent packages that you installed for work yesterday breaking the lesson you are supposed to teach this morning.

Use the Screen Wisely

You will usually need to enlarge your font considerably for people to read it from the back of the room, which means you can put much less on the screen than you’re used to. In many cases you will be reduced to 60–70 columns and 20–30 rows, so that you’re using a 21st century supercomputer as if it was a dumb terminal from the early 1980s.

To manage this, maximize the window you are using to teach and then ask everyone to give you a thumbs-up or thumbs-down on its readability. Use a black font on a lightly-tinted background rather than a light font on a dark background—the light tint will glare less than pure white.

Pay attention to the room lighting as well: it should not be fully dark, and there should be no lights directly on or above your projection screen. Allow a few minutes for learners to reposition their tables so that they can see clearly.

When the bottom of the projection screen is at the same height as your learners’ heads, people in the back won’t be able to see the lower parts. You can raise the bottom of your window to compensate, but this will give you even less space for typing.

If you can get a second projector and screen, use it: the extra real estate will allow you to display your code on one side and its output or behavior on the other. If the second screen requires its own computer, ask a helper to control it rather than hopping back and forth between two keyboards.

Finally, if you are teaching something like the Unix shell in a console window, it’s important to tell people when you run an in-console text editor and when you return to the console prompt. Most novices have never seen a window take on multiple personalities in this way, and can quickly become confused by when you are interacting with the shell, when you are typing in the editor, and when you are dealing with an interactive prompt for Python or some other language. You can avoid this problem by using a separate window for editing; if you do this, always tell learners when you are switching focus from one to the other.

Accessibility Aids Help Everyone

Tools like Mouseposé (for Mac) and PointerFocus (for Windows) highlight the position of your mouse cursor on the screen, and screen recording tools like Camtasia and standalone applications like KeyCastr echo invisible keys like tab and Control-J as you type them. These can be a bit annoying when you first start to use them, but help your learners figure out what you’re doing.

Double Devices

Some people now use two devices when teaching: a laptop plugged into the projector for learners to see and a tablet so that they can view their own notes and the notes that the learners are taking (Section 9.7). This is more reliable than flipping back and forth between virtual desktops, though a printout of the lesson is still the most reliable backup technology.

Draw Early, Draw Often

Diagrams are always a good idea. I sometimes have a slide deck full of ones that I have prepared in advance, but building diagrams step by step helps with retention (Section 4.1) and allows you to improvise.

Avoid Distractions

Turn off any notifications you may use on your laptop, especially those from social media. Seeing messages flash by on the screen distracts you as well as your learners, and it can be awkward when a message pops up you’d rather not have others see. Again, you might want to create a second account on your computer that doesn’t have email or other tools set up at all.

Improvise—After You Know the Material

Stick fairly closely to the lesson plan you’ve drawn up or borrowed the first time you teach a lesson. It may be tempting to deviate from the material because you would like to show a neat trick or demonstrate another way to do something, but there is a fair chance you’ll run into something unexpected that will cost you more time than you have.

Once you are more familiar with the material, though, you can and should start improvising based on the backgrounds of your learners, their questions in class, and what you personally find most interesting. This is like playing a new song: you stick to the sheet music the first few times, but after you’re comfortable with the melody and chord changes, you can start to put your own stamp on it.

When you want to use something new, run through it beforehand using the same computer that you’ll be teaching on: installing several hundred megabytes of software over high school WiFi in front of bored 16-year-olds isn’t something you ever want to have to do.

Direct Instruction

Direct Instruction (DI) is a teaching method centered around meticulous curriculum design delivered through a prescribed script. It’s more like an actor reciting lines than it is like the improvisational approach we recommend. [Stoc2018] found a statistically significant positive effect for DI even though it is sometimes criticized as mechanical. I prefer improvisation because DI requires more up-front preparation than most free-range learning groups can afford.

Face the Screen—Occasionally

It’s OK to face the projection screen occasionally when you are walking through a section of code or drawing a diagram: not looking at a roomful of people who are all looking at you can help lower your anxiety levels and give you a moment to think about what to say next.

You shouldn’t do this for more than a few seconds at a time, though. A good rule of thumb is to treat the projection screen as one of your learners: if it would be uncomfortable to stare at someone for as long as you are spending looking at the screen, it’s time to turn around and face your class again.

Drawbacks

Live coding does have some drawbacks, but they can all be avoided or worked around with a little bit of practice. If you find that you are making too many trivial typing mistakes, set aside five minutes every day to practice typing: it will help your day-to-day work as well. If you think you are spending too much time referring to your lesson notes, break them into smaller pieces so that you only ever have to think about one small step at a time.

And if you feel that you’re spending too much time typing in library import statements, class headers, and other boilerplate code, give yourself and your learners some skeleton code as a starting point (Section 9.9). Doing this will also reduce their cognitive load, since it will focus their attention where you want it.

Lesson Study

From politicians to researchers and teachers themselves, educational reformers have designed systems to find and promote people who can teach well and eliminate those who cannot. But the assumption that some people are born teachers is wrong: instead, like any other performance art, the keys to better teaching are practice and collaboration. As [Gree2014] explains, the Japanese approach to this is called jugyokenkyu, which means “lesson study”:

In order to graduate, [Japanese] education majors not only had to watch their assigned master teacher work, they had to effectively replace him, installing themselves in his classroom first as observers and then, by the third week, as a wobbly approximation of the teacher himself. It worked like a kind of teaching relay. Each trainee took a subject, planning five days’ worth of lessons [and then] each took a day. To pass the baton, you had to teach a day’s lesson in every single subject: the one you planned and the four you did not, and you had to do it right under your master teacher’s nose. Afterward, everyone—the teacher, the college students, and sometimes even another outside observer—would sit around a formal table to talk about what they saw.

Putting work under a microscope in order to improve it is commonplace in fields as diverse as manufacturing and music. A professional musician, for example, will dissect half a dozen recordings of “Body and Soul” or “Smells Like Teen Spirit” before performing it. They would also expect to get feedback from fellow musicians during practice and after performances.

But continuous feedback isn’t part of teaching culture in most English-speaking countries. There, what happens in the classroom stays in the classroom: teachers don’t watch each other’s lessons on a regular basis, so they can’t borrow each other’s good ideas. Teachers may get lesson plans and assignments from colleagues, the school board or a textbook publisher, or go through a few MOOCs on the internet, but each one has to figure out how to deliver specific lessons in specific classrooms for specific learners. This is particularly true for volunteers and other free-range teachers involved in after-school workshops and bootcamps.

Writing up new techniques and giving demonstration lessons (in which one person teaches actual learners while other teachers observe) are not solutions. For example, [Finc2007,Finc2012] found that of the 99 change stories analyzed, teachers only searched actively for new practices or materials in three cases, and only consulted published material in eight. Most changes occurred locally, without input from outside sources, or involved only personal interaction with other educators. [Bark2015] found something similar:

Adoption is not a “rational action” but an iterative series of decisions made in a social context, relying on normative traditions, social cueing, and emotional or intuitive processes… Faculty are not likely to use educational research findings as the basis for adoption decisions… Positive student feedback is taken as strong evidence by faculty that they should continue a practice.

Jugyokenkyu works because it maximizes the opportunity for unplanned knowledge transfer between teachers: someone sets out to demonstrate X, but while watching them, their audience actually learns Y as well (or instead). For example, a teacher might intend to show learners how to search for email addresses in a text file, but what their audience might take away is some new keyboard shortcuts.
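The email-search demonstration mentioned above might be only a few lines; as a sketch (the regular expression here is deliberately simplistic and would miss many valid addresses):

```python
import re

# Deliberately simple pattern for a teaching demo; matching real-world
# email addresses is much messier than this.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

text = "Contact alice@example.com or bob@example.org for details."
print(EMAIL.findall(text))  # ['alice@example.com', 'bob@example.org']
```

While live coding something like this, the audience also picks up editor shortcuts, naming habits, and incremental testing, which is exactly where the unplanned transfer happens.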

Giving and Getting Feedback on Teaching

Observing someone helps you, and giving them feedback helps them, but it can be hard to receive feedback, especially when it’s negative (Figure [f:performance-feedback-feelings]).

Feedback feelings (copyright © Deathbulge 2013)

Feedback is easier to give and receive when both parties share expectations about what is and isn’t in scope and about how comments ought to be phrased. If you are the person asking for feedback:

Initiate feedback.

It’s better to ask for feedback than to receive it unwillingly.

Choose your own questions,

i.e. ask for specific feedback. It’s a lot harder for someone to answer, “What do you think?” than to answer either, “Was I speaking too quickly?” or, “What is one thing from this lesson I should keep doing?” Directing feedback like this is also more helpful to you. It’s always better to try to fix one thing at a time than to change everything and hope it’s for the better. Directing feedback at something you have chosen to work on helps you stay focused, which in turn increases the odds that you’ll see progress.

Use a feedback translator.

Have someone else read over all the feedback and give you a summary. It can be easier to hear, “Several people think you could speed up a little,” than to read several notes all saying, “This is too slow” or, “This is boring.”

Be kind to yourself.

Many of us are very critical of ourselves, so it’s always helpful to jot down what we thought of ourselves before getting feedback from others. That allows us to compare what we think of our performance with what others think, which in turn allows us to scale the former more accurately. For example, it’s very common for people to think that they’re saying “um” and “err” too often when their audience doesn’t notice it. Getting that feedback once allows teachers to adjust their assessment of themselves the next time they feel that way.

You can give feedback to others more effectively as well:

Interact.

Staring at someone is a good way to make them feel uncomfortable, so if you want to give feedback on how someone normally teaches, you need to set them at ease. Interacting with them the way that a real learner would is a good way to do this, so ask questions or (pretend to) type along with their example. If you are part of a group, have one or two people play the role of learner while the others take notes.

Balance positive and negative feedback.

The “compliment sandwich” made up of one positive comment, one negative, and a second positive becomes tiresome pretty quickly, but it’s important to tell people what they should keep doing as well as what they should change.

Take notes.

You won’t remember everything you noticed if the presentation lasts longer than a few seconds, and you definitely won’t recall how often you noticed them. Make a note the first time something happens and then add a tick mark when it happens again so that you can sort your feedback by frequency.

Taking notes is more efficient when you have some kind of rubric so that you’re not scrambling to write your observations while the person you’re observing is still talking. The simplest rubric for free-form comments from a group is a 2x2 grid whose vertical axis is labeled “what went well” and “what can be improved”, and whose horizontal axis is labeled “content” (what was said) and “presentation” (how it was said). Observers write their comments on sticky notes as they watch the demonstration, then post those in the quadrants of a grid drawn on a whiteboard (Figure [f:performance-rubric]).

Teaching rubric

Rubrics and Question Budgets

Section 21.1 contains a sample rubric for assessing 5–10 minutes of programming instruction. It presents items in more or less the order that they’re likely to come up, e.g. questions about the introduction come before questions about the conclusion.

Rubrics like this one tend to grow over time as people think of things they’d like to add. A good way to keep them manageable is to insist that the total length stays constant: if someone wants to add a question, they have to identify one that’s less important and can be removed.

If you are interested in giving and getting feedback, [Gorm2014] has good advice that you can use to make peer-to-peer feedback a routine part of your teaching, while [Gawa2011] looks at the value of having a coach.

Studio Classes

Architecture schools often include studio classes in which students solve small design problems and get feedback from their peers right then and there. These classes are most effective when the teacher critiques the peer critiques so that participants learn not only how to make buildings but how to give and get feedback [Scho1984]. Master classes in music serve a similar purpose, and I have found that giving feedback on feedback helps people improve their teaching as well.

How to Practice Performance

The best way to improve your in-person lesson delivery is to watch yourself do it:

  • Work in groups of three.

  • Each person rotates through the roles of teacher, audience, and videographer. The teacher has 2 minutes to explain something. The person pretending to be the audience is there to be attentive, while the videographer records the session using a cellphone or other handheld device.

  • After everyone has finished teaching, the whole group watches the videos together. Everyone gives feedback on all three videos, i.e. people give feedback on themselves as well as on others.

  • After the videos have been discussed, they are deleted. (Many people are justifiably uncomfortable about images of themselves appearing online.)

  • Finally, the whole class reconvenes and adds all the feedback to a shared 2x2 grid of the kind described above without saying who each item of feedback is about.

In order for this exercise to work well:

  • Record all three videos and then watch all three. If the cycle is teach-review-teach-review, the last person to teach invariably runs short of time (sometimes on purpose). Doing all the reviewing after all the teaching also helps put a bit of distance between the two, which makes the exercise slightly less excruciating.

  • Let people know at the start of the class that they will be asked to teach something so that they have time to choose a topic. Telling them this too far in advance can be counter-productive, since some people will fret over how much they should prepare.

  • Groups must be physically separated to reduce audio cross-talk between their recordings. In practice, this means 2–3 groups in a normal-sized classroom, with the rest using nearby breakout spaces, coffee lounges, offices, or (on one occasion) a janitor’s storage closet.

  • People must give feedback on themselves as well as on each other so that they can calibrate their impressions of their own teaching against those of other people. Most people are harder on themselves than they ought to be, and it’s important for them to realize this.

The announcement of this exercise is often greeted with groans and apprehension, since few people enjoy seeing or hearing themselves. However, those same people consistently rate it as one of the most valuable parts of teaching workshops. It’s also good preparation for co-teaching (Section 9.3): teachers find it a lot easier to give each other informal feedback if they have had some practice doing so and have a shared rubric to set expectations.

And speaking of rubrics: once the class has put all of their feedback on a shared grid, pick a handful of positive and negative comments, write them up as a checklist, and have them do the exercise again. Most people are more comfortable the second time around, and being assessed on the things that they themselves have decided are important increases their sense of self-determination (Chapter 10).

Tells

We all have nervous habits: we talk more rapidly and in a higher-pitched voice than usual when we’re on stage, play with our hair, or crack our knuckles. Gamblers call these “tells,” and people often don’t realize that they pace, look at their shoes, or rattle the change in their pocket when they don’t actually know the answer to a question.

You can’t get rid of tells completely, and trying to do so can make you obsess about them. A better strategy is to try to displace them—for example, to train yourself to scrunch your toes inside your shoes when you’re nervous instead of cleaning your ear with your pinky finger.

Exercises

Give Feedback on Bad Teaching (whole class/20)

As a group, watch this video of bad teaching and give feedback on two axes: positive vs. negative and content vs. presentation. Have each person in the class add one point to a 2x2 grid on a whiteboard or in the shared notes without duplicating any points. What did other people see that you missed? What did they think that you strongly agree or disagree with?

Practice Giving Feedback (small groups/45)

Use the process described in Section 8.4 to practice teaching in groups of three and pool feedback.

The Bad and the Good (whole class/20)

Watch the videos of live coding done poorly and live coding done well and summarize your feedback on the differences using the usual 2x2 grid. How is the second round of teaching better than the first? Is there anything that was better in the first than in the second?

See, Then Do (pairs/30)

Teach 3–4 minutes of a lesson using live coding to a classmate, then swap and watch while that person live codes for you. Don’t bother trying to record these sessions—it’s difficult to capture both the person and the screen with a handheld device—but give feedback the same way you have previously. Explain in advance to your fellow trainee what you will be teaching and what the learners you teach it to are expected to be familiar with.

  • What felt different about live coding compared to standing up and lecturing? What was easier or harder?

  • Did you make any mistakes? If so, how did you handle them?

  • Did you talk and type at the same time, or alternate?

  • How often did you point at the screen? How often did you highlight with the mouse?

  • What will you try to keep doing next time? What will you try to do differently?

Tells (small groups/15)

  1. Make a note of what you think your tells are, but do not share them with other people.

  2. Teach a short lesson (2–3 minutes long).

  3. Ask your audience how they think you betray nervousness. Is their list the same as yours?

Teaching Tips (small groups/15)

The CS Teaching Tips site has a large number of practical tips on teaching computing, as well as a collection of downloadable tip sheets. Go through their tip sheets in small groups and classify each tip according to whether you use it all the time, use it occasionally, or never use it. Where do your practice and your peers’ practice differ? Are there any tips you strongly disagree with or think would be ineffective?

Review

Concepts: Feedback

In the Classroom

The previous chapter explained how to practice lesson delivery and introduced one method—live coding—that allows teachers to adapt to their learners’ pace and interests. This chapter describes other practices that are also useful in programming classes.

Before describing them, it’s worth pausing for a moment to set expectations. The best teaching method we know is individual tutoring: [Bloo1984] found that students taught one-to-one did two standard deviations better than those who learned through conventional lecture, i.e. that individually tutored students outperformed 98% of students who were lectured to. However, while mentoring and apprenticeship were the most common ways to pass on knowledge throughout most of history, the industrialization of formal education has made them the exception today. Despite the hype around artificial intelligence, it isn’t going to square this circle any time soon, so every method described below is essentially an attempt to approach the effectiveness of individual tutoring at scale.

Enforce the Code of Conduct

The most important thing I’ve learned about teaching in the last 30 years is how important it is for everyone to treat everyone else with respect, both in and out of class. If you use this material in any way, please adopt a Code of Conduct like the one in Appendix 17 and require everyone who takes part in your classes to abide by it. It can’t stop people from being offensive, any more than laws against theft stop people from stealing, but it can make expectations and consequences clear, and signal that you are trying to make your class welcoming to all.

But a Code of Conduct is only useful if it is enforced. If you believe that someone has violated yours, you may warn them, ask them to apologize, and/or expel them, depending on the severity of the violation and whether or not you believe it was intentional. Whatever you do:

Do it in front of witnesses.

Most people will tone down their language and hostility in front of an audience, and having someone else present ensures that later discussion doesn’t degenerate into conflicting claims about who said what.

If you expel someone, say so to the rest of the class and explain why.

This helps prevent rumors from spreading and shows that your Code of Conduct actually means something.

Email the offender as soon as you can

to summarize what happened and the steps you took, and copy the message to your workshop’s hosts or one of your fellow teachers so that there’s a contemporaneous record of the conversation. If the offender replies, don’t engage in a long debate: it’s never productive.

What happens outside of class matters at least as much as what happens within it [Part2011], so you need to provide a way for learners to report problems that you aren’t there to see yourself. One step is to ask someone who isn’t part of your group to be the first point of contact; that way, if someone wants to make a complaint about you or one of your fellow teachers, they have some assurance of confidentiality and independent action. [Auro2019] has lots of other advice and is both short and practical.

Peer Instruction

No matter how good a teacher is, they can only say one thing at a time. How then can they clear up many different misconceptions in a reasonable time? The best solution developed so far is a technique called peer instruction. Originally created by Eric Mazur at Harvard [Mazu1996], it has been studied extensively in a wide variety of contexts, including programming [Crou2001,Port2013], and [Port2016] found that learners value peer instruction even at first contact.

Peer instruction attempts to provide one-to-one instruction in a scalable way by interleaving formative assessment with learner discussion:

  1. Give a brief introduction to the topic.

  2. Give learners a multiple choice question that probes for their misconceptions (rather than testing simple factual knowledge).

  3. Have all the learners vote on their answers to the MCQ.

    • If the learners all have the right answer, move on.

    • If they all have the same wrong answer, address that specific misconception.

    • If they have a mix of right and wrong answers, give them several minutes to argue with each other in groups of 2–4, then vote again.

As this video shows, group discussion significantly improves learners’ understanding because it uncovers gaps in their reasoning and forces them to clarify their thinking. Re-polling the class then lets the teacher know if they can move on or if further explanation is necessary. A final round of additional explanation after the correct answer is presented gives learners one more chance to solidify their understanding.
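A misconception-probing question for step 2 might look like the sketch below (a hypothetical example; questions targeting misconceptions your learners actually hold work better). It probes whether learners believe slicing a Python list returns a view rather than a copy:

```python
# "After this code runs, what does values[0] hold?"
#   (a) 99    (b) 1    (c) it raises an error
values = [1, 2, 3]
copy = values[:]
copy[0] = 99
print(values[0])
# The correct answer is (b): slicing a list creates a new list, so
# mutating the copy leaves the original untouched.
```

Each distractor should correspond to a specific misconception, so that the vote tells you which wrong mental model is most common in the room.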

But could this be a false positive? Are results improving because of increased understanding during discussion or simply from a follow-the-leader effect (“vote like Jane, she’s always right”)? [Smit2009] tested this by following the first question with a second one that learners answered individually. They found that peer discussion actually does enhance understanding, even when none of the learners in a discussion group originally knew the correct answer. As long as there is a diversity of opinion within the group, their misconceptions cancel out.

Taking a Stand

It is important to have learners vote publicly so that they can’t change their minds afterward and rationalize it by making excuses to themselves like “I just misread the question.” Much of the value of peer instruction comes from the hypercorrection of getting the wrong answer and having to think through the reasons why (Section 5.1).

Teach Together

Co-teaching describes any situation in which two teachers work together in the same classroom. [Frie2016] describes several ways to do this:

Team teaching:

Both teachers deliver a single stream of content in tandem, taking turns like musicians taking solos.

Teach and assist:

Teacher A teaches while Teacher B moves around the classroom to help struggling learners.

Alternative teaching:

Teacher A provides a small set of learners with more intensive or specialized instruction while Teacher B delivers a general lesson to the main group.

Teach and observe:

Teacher A teaches while Teacher B observes the learners, collecting data on their understanding to help plan future lessons.

Parallel teaching:

The class is divided in two and the teachers present the same material simultaneously to each.

Station teaching:

The learners are divided into small groups that rotate from one station or activity to the next while teachers supervise where needed.

All of these models create more opportunities for unintended knowledge transfer than teaching alone. Team teaching is particularly beneficial in day-long workshops: it gives each teacher’s voice a chance to rest and reduces the risk that they will be so tired by the end of the day that they will start snapping at their learners or fumbling at their keyboard.

Helping

Many people who aren’t comfortable teaching are willing and able to provide in-class technical support. They can help learners with setup and installation, answer technical questions during exercises, monitor the room to spot people who may need help, and keep an eye on the shared notes (Section 9.7), either answering questions there or reminding the teacher to do so during breaks.

Helpers are sometimes people training to become teachers (i.e. they’re Teacher B in the teach and assist model), but they can also be members of the host institution’s technical support staff, alumni of the class, or advanced learners who already know the material well. Using the latter as helpers is doubly effective: not only are they more likely to understand the problems their peers are having, it also stops them from getting bored. This helps the whole class stay engaged because boredom is infectious: if a handful of people start checking out, the people around them will follow suit.

If you and a partner are co-teaching:

  • Take 2–3 minutes before the start of each class to confirm who’s teaching what. If you have time, try drawing or reviewing a concept map together.

  • Use that time to work out a couple of hand signals as well. “You’re going too fast,” “speak up,” “that learner needs help,” and “it’s time for a bathroom break” are all useful.

  • Each person should teach for at least 10–15 minutes at a stretch, since learners will be distracted by more frequent switch-ups.

  • The person who isn’t teaching shouldn’t interrupt, offer corrections or elaborations, or do anything else to distract from what the person teaching is doing or saying. The one exception is to ask leading questions if the learners seem lethargic or unsure of themselves.

  • Each person should take a couple of minutes before they start teaching to see what their partner is going to teach after they’re done, and then not present any of that material.

  • The person who isn’t teaching should stay engaged with the class, not catch up on their email. Monitoring the shared notes (Section 9.7), keeping an eye on the learners to see who’s struggling, jotting down some feedback to give your teaching partner at the next break—anything that contributes to the lesson is better than anything that doesn’t.

Most importantly, take a few minutes when the class is over to congratulate or commiserate with each other: in teaching as in life, shared misery is lessened and shared joy increased.

Assess Prior Knowledge

The more you know about your learners before you start teaching, the more you will be able to help them. Inside a formal school system, the prerequisites to your course will tell you something about what they’re likely to already know. In a free-range setting, though, your learners may be much more diverse, so you may want to give them a short survey or questionnaire in advance of your class to find out what knowledge and skills they have.

Asking people to rate themselves on a scale from 1 to 5 is pointless because the less people know about a subject, the less accurately they can estimate their knowledge (Figure [f:classroom-dunning-kruger], from https://theness.com/neurologicablog/index.php/misunderstanding-dunning-kruger/), a phenomenon called the Dunning-Kruger effect [Krug1999]. Conversely, people who are members of underrepresented groups will often underrate their skills.

The Dunning-Kruger Effect

Rather than asking people to self-assess, you can ask them how easily they could complete some specific tasks. Doing this is risky, though, because school trains people to treat anything that looks like an exam as something they have to pass rather than as a chance to shape instruction. If someone answers “I don’t know” to even a couple of questions on your pre-assessment, they might conclude that your class is too advanced for them. You could therefore scare off many of the people you most want to help.

Section 21.5 presents a short pre-assessment questionnaire that most potential learners are unlikely to find intimidating. If you use it or anything like it, try to follow up with people who don’t respond to find out why not and compare your evaluation of learners with their self-assessment to improve your questions.

Plan for Mixed Abilities

If your learners have widely varying levels of prior knowledge, you can easily wind up in a situation where a third of your class is lost and a third is bored. That’s unsatisfying for everyone, but there are some strategies you can use to manage the situation:

  • Before running a workshop, communicate its level clearly to everyone by showing a few examples of exercises that they will be asked to complete. This helps potential participants gauge the level of the class far more effectively than a point-form list of topics.

  • Provide extra self-paced exercises so that more advanced learners don’t finish early and get bored.

  • Keep an eye out for learners who are falling behind and intervene early so that they don’t become frustrated and give up.

  • Ask more advanced learners to help people next to them (see Section 9.6 below).

One other way to accommodate mixed abilities is to have everyone work through material on their own at their own pace as they would in an online course, but to do it simultaneously and side by side with helpers roaming the room to get people unstuck. Some people will go three or four times further than others when workshops are run like this, but everyone will have had a rewarding and challenging day.

False Beginners

A false beginner is someone who has studied a language before but is learning it again. They may be indistinguishable from absolute beginners on pre-assessment tests, but they are able to move much more quickly once the class starts because they are re-learning rather than learning for the first time.

Being a false beginner is often a sign of preparatory privilege [Marg2010], and false beginners are common in free-range programming classes. For example, a child whose family is affluent enough to have sent them to a robotics summer camp may do poorly on a pre-test of programming knowledge because the material isn’t fresh in their mind, but still has an advantage over a child from a less fortunate background. The strategies described above can help level the playing field in cases like this, but again, the real solution is to use your own privilege to address larger out-of-class factors [Part2011].

The most important thing is to accept that you can’t help everyone all of the time. If you slow down to accommodate two people who are struggling, you are failing the other eighteen. Equally, if you spend a few minutes talking about an advanced topic to a learner who is bored, the rest of the class will feel left out.

Pair Programming

Pair programming is a software development practice in which two programmers work together on one computer. One person (the driver) does the typing while the other (the navigator) offers comments and suggestions, and the two switch roles several times per hour.

Pair programming is an effective practice in professional work [Hann2009] and is also a good way to teach: benefits include increased success rate in introductory courses, better software, and higher learner confidence in their solutions. There is also evidence that learners from underrepresented groups benefit even more than others [McDo2006,Hank2011,Port2013,Cele2018]. Partners can help each other out during practical exercises, clarify each other’s misconceptions when the solution is presented, and discuss common interests during breaks. I have found it particularly helpful with mixed-ability classes, since pairs are more homogeneous than individuals.

When you use pairing, put everyone in pairs, not just learners who are struggling, so that no one feels singled out. It’s also useful to have people sit in new places (and hence pair with different partners) on a regular basis, and to have people switch roles within each pair three or four times per hour so that the stronger personality in each pair doesn’t dominate the session.

If your learners are new to pair programming, take a few minutes to demonstrate what it actually looks like so that they understand that the person who doesn’t have their hands on the keyboard isn’t supposed to just sit and watch. Finally, tell them that people who focus on trying to complete the task as quickly as possible are less fair in their sharing [Lewi2015].

Switching Partners

Teachers have mixed opinions on whether people should be required to change partners at regular intervals. On the one hand, it gives everyone a chance to gain new insights and make new friends. On the other, moving computers and power adapters to new desks several times a day is disruptive, and pairing can be uncomfortable for introverts. That said, [Hann2010] found only weak correlations between the “Big Five” personality traits and performance in pair programming, although an earlier study [Wall2009] found that pairs whose members had differing levels of personality traits communicated more often.

Take Notes Together?

Note-taking is a form of real-time elaboration (Section 5.1): it forces you to organize and reflect on material as it’s coming in, which in turn increases the likelihood that you will transfer it to long-term memory. Many studies have shown that taking notes while learning improves retention [Aike1975,Boha2011]. While it has not yet been widely studied [Ornd2015,Yang2015], I have found that having learners take notes together in a shared online page is also effective:

  • It allows people to compare what they think they’re hearing with what other people are hearing, which helps them fill in gaps and correct misconceptions right away.

  • It gives the more advanced learners in the class something useful to do. Rather than getting bored and checking Instagram during class, they can take the lead in recording what’s being said, which keeps them engaged and allows less advanced learners to focus more of their attention on new material.

  • The notes the learners take are usually more helpful to them than those the teacher would prepare in advance, since the learners are more likely to write down what they actually found new rather than what the teacher predicted would be new.

  • Glancing at recent notes while learners are working on an exercise helps the teacher discover that the class missed or misunderstood something.

Is the Pen Mightier than the Keyboard?

[Muel2014] reported that taking notes on a computer is generally less effective than taking notes using pen and paper. While their result was widely shared, [More2019] was unable to replicate it.

If learners are taking notes together, you can also have them paste in short snippets of code and point-form or sentence-length answers to formative assessment questions. To prevent everyone from trying to edit the same couple of lines at the same time, make a list of everyone’s name and paste it into the document whenever you want each person to answer a question.

Learners often find that taking notes together is distracting the first time they try it because they have to split their attention between what the teacher is saying and what their peers are writing (Section 4.1). If you are only working with a particular group once, you should therefore heed the advice in Section 9.12 and have them take notes individually.

Points for Improvement

One way to demonstrate to learners that they are learning with you, not just from you, is to allow them to take notes by editing (a copy of) your lesson. Instead of posting PDFs for them to download, create editable copies of your slides, notes, and exercises in a wiki, a Google Doc, or anything else that allows you to review and comment on changes. Giving people credit for fixing mistakes, clarifying explanations, adding new examples, and writing new exercises doesn’t reduce your workload, but increases engagement and the lesson’s lifetime (Section 6.3).

Sticky Notes

Sticky notes are one of my favorite teaching tools, and I’m not alone in loving their versatility, portability, stickability, foldability, and subtle yet alluring aroma [Ward2015].

As Status Flags

Give each learner two sticky notes of different colors, such as orange and green. These can be held up for voting, but their real use is as status flags. If someone has completed an exercise and wants it checked, they put the green sticky note on their laptop; if they run into a problem and need help, they put up the orange one. This works much better than having people raise their hands: it’s more discreet (which means they’re more likely to actually do it), they can keep working while their flag is raised rather than trying to type one-handed, and the teacher can quickly see from the front of the room what state the class is in. Status flags are particularly helpful when people in mixed-ability classes are working through material at their own speed (Section 9.5).

Once your learners are comfortable with two stickies, give them a third note of another color (say, blue) that they can put up when their brains are full or they need a bathroom break. Again, adults are more likely to post a sticky than to raise their hand, and once one blue sticky note goes up, a flurry of others usually follows.

To Distribute Attention

Sticky notes can also be used to ensure the teacher’s attention is fairly distributed. Have each learner write their name on a sticky note and put it on their laptop. Each time the teacher calls on them or answers one of their questions, they take their sticky note down. Once all the sticky notes are down, everyone puts theirs up again.

This technique makes it easy for the teacher to see who they haven’t spoken with recently, which in turn helps them avoid unconscious bias and interacting preferentially with their most extroverted learners. Without a check like this, it’s all too easy to create a feedback loop in which extroverts get more attention, which leads to them doing better, which in turn leads to them getting more attention, while quieter, less confident, or marginalized learners are left behind [Alvi1999,Juss2005].

It also shows learners that attention is being distributed fairly so that when they are called on, they won’t feel like they’re being picked on. When I am working with a new group, I allow people to take down their own sticky notes during the first hour or two of class if they would rather not be called on. If they keep doing this as time goes on, I try to have a quiet conversation with them to find out why and to see if there’s anything I can do to make them more comfortable.

As Minute Cards

You can also use sticky notes as minute cards. Before each break, learners take a minute to write one thing on the green sticky note that they think will be useful and one thing on the orange note that they found too fast, too slow, confusing, or irrelevant. While they are enjoying their coffee or lunch, review their notes and look for patterns. It takes less than five minutes to see what learners in a 40-person class are enjoying, what they are confused by, what problems they’re having, and what questions you have not yet answered.

Learners should not sign their minute cards: they are meant as anonymous feedback. The one-up/one-down technique described in Section 9.11 is a chance for collective, attributable feedback.

Never a Blank Page

Programming workshops and other kinds of classes can be built around a set of independent exercises, develop a single extended example in stages, or use a mixed strategy. The two main advantages of independent exercises are that people who fall behind can easily re-synchronize and that lesson developers can add, remove, and rearrange material at will (Section 6.3). A single extended example, on the other hand, will show learners how the bits and pieces they’re learning fit together: in educational parlance, it provides more opportunity for them to integrate their knowledge.

Whichever approach you take, novices should never start doing exercises with a blank page or screen, since they often find this intimidating or bewildering. If they have been following along as you do live coding, ask them to add a few more lines or to modify the example you have built up. Alternatively, if they are taking notes together, paste a few lines of starter code into the document for them to extend or modify.

Modifying existing code instead of writing new code from scratch doesn’t just give learners structure: it is also closer to what they will do in real life. Keep in mind, though, that learners may be distracted by trying to understand all of the starter code rather than doing their own work. Java’s public static void main() or a handful of import statements at the top of a Python program may make sense to you, but is extraneous load to them (Chapter 4).
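For example, a few lines like these give novices something concrete to modify instead of a blank editor; the variable names, data, and threshold are invented for illustration:

```python
# Hypothetical starter snippet pasted into the shared notes: learners
# extend it rather than writing the loop from scratch.
temperatures = [18.5, 21.0, 24.3, 19.8, 26.1]

for t in temperatures:
    if t > 25.0:               # exercise: adjust this threshold...
        print(t, "is hot")     # ...and add an elif branch for "cold"
```

Note that there is nothing here that learners have to treat as magic: every line is something they can read, run, and change.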

Setting Up Your Learners

Free-range learners often want to bring their own computers and to leave the class with those machines set up to do real work. Free-range teachers should therefore prepare to teach on both Windows and MacOS, even though it would be simpler to require learners to use just one.

Common Denominators

If your participants are using different operating systems, try to avoid using features which are specific to just one and point out any that you do use. For example, the “minimize window” controls and behavior on Windows are different from those on MacOS.

No matter how many platforms you have to deal with, put detailed setup instructions on your course website and email learners a couple of days before the workshop starts to remind them to do the setup. A few people will still show up without the required software because they ran into problems, couldn’t find time to complete all the steps, or are simply the sort of person who never follows instructions in advance. To detect this, have everyone run some simple command as soon as they arrive and show the teachers the result, then get helpers and other learners to assist people who have run into trouble.
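That “simple command” can be a short script that prints the interpreter version and flags missing packages. The package list below is only an example and should be replaced with whatever your workshop actually requires:

```python
# check_setup.py: learners run this on arrival and show the teacher the output.
import importlib.util
import sys

print("Python", sys.version.split()[0])

for package in ["numpy", "pandas"]:  # example list; adapt to your workshop
    found = importlib.util.find_spec(package) is not None
    print(package, "OK" if found else "MISSING")
```

A script like this is deliberately boring: it must run even on a broken setup, so it only inspects the environment rather than importing anything heavyweight.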

Virtual Machines

Some people use tools like Docker to put containers or virtual machines on learners’ computers so that everyone is working with exactly the same tools, but this introduces a new set of problems. Older or smaller machines simply aren’t fast enough to run them, learners struggle to switch back and forth between two different sets of keyboard shortcuts for things like copying and pasting, and even competent practitioners will become confused about what exactly is happening where.

Setting up is so complicated that many teachers prefer to have learners use browser-based tools instead. However, this makes the class dependent on institutional WiFi (which can be of highly variable quality) and doesn’t satisfy learners’ desire to leave with their own machines ready for real-world use. As cloud-based tools like Glitch and RStudio Cloud become more robust, though, the latter consideration is becoming less important.

One last way to tackle setup issues is to split the class over several days, and to have people install what’s required for each day before leaving class on the day before. Dividing the work into chunks makes each one less intimidating, learners are more likely to actually do it, and it ensures that you can start on time for every lesson except the first.

Other Teaching Practices

None of the smaller practices described below are essential, but all will improve lesson delivery. As with chess and marriage, success in teaching is often a matter of slow, steady progress.

Start With Introductions

Begin your class by introducing yourself. If you’re an expert, tell them a bit about how you got to where you are; if you’re only two steps ahead of them, emphasize what you and they have in common. Whatever you say, your goals are to make yourself more approachable and to encourage their belief that they can succeed.

Learners should also introduce themselves to each other. In a class of a dozen, they can do this verbally; in a larger class or if they are strangers to one another, I find it’s better to have them each write a line or two about themselves in the shared notes (Section 9.7).

Set Up Your Own Environment

Setting up your environment is just as important as setting up your learners’, but more involved. As well as having network access and all the software you’re going to use, you should also have a glass of water or a cup of tea or coffee. This helps keep your throat lubricated, but its real purpose is to give you an excuse to pause and think for a couple of seconds when someone asks a hard question or when you lose track of what you were going to say next. You will probably also want some whiteboard pens and a few of the other things described in Section 21.3.

One way to keep your day-to-day work from getting in the way of your teaching is to create a separate account on your computer for the latter. Use system defaults for everything in this second account, along with a larger font and a blank screen background, and turn off notifications so that your teaching isn’t interrupted by pop-ups.

Avoid Homework in All-Day Formats

Learners who have spent an entire day programming will be tired. If you give them homework to do after hours, they’ll start the next day tired as well, so don’t.

Don’t Touch the Learner’s Keyboard

It’s often tempting to fix things for learners, but even if you narrate every step, it’s likely to demotivate them by emphasizing the gap between their knowledge and yours. Instead, keep your hands off the keyboard and talk your learners through whatever they need to do: it will take longer, but it’s more likely to stick.

Repeat the Question

Whenever someone asks a question in class, repeat it back to them before answering to check that you’ve understood it and to give people who might not have heard it a chance to do so. This is particularly important when presentations are being recorded or broadcast, since your microphone will usually not pick up what other people are saying. Repeating questions back also gives you a chance to redirect the question to something you’re more comfortable answering.

One Up, One Down

An adjunct to minute cards is to ask for summary feedback at the end of each day. Learners alternately give either one positive or one negative point about the day without repeating anything that has already been said. The ban on repeats forces people to say things they otherwise might not: once all the “safe” feedback has been given, participants will start saying what they really think.

Different Modes, Different Answers

Minute cards (Section 9.8) are anonymous; the alternating up-and-down feedback is not. You should use the two together because anonymity allows both honesty and trolling.

Have Learners Make Predictions

Research has shown that people learn more from demonstrations if they are asked to predict what’s going to happen [Mill2013]. Doing this fits naturally into live coding: after adding or changing a few lines of a program, ask the class what is going to happen when it runs. If the example is even moderately complex, prediction can serve as a motivating question for a round of peer instruction.
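A prediction prompt can be as small as pausing before the last line of a snippet like this one and asking the class what it will print (list aliasing is a classic source of wrong predictions):

```python
first = [1, 2, 3]
second = first        # aliases the list; it does not copy it
second.append(4)

print(first)          # ask for predictions before running: [1, 2, 3, 4]
```

Learners who predict `[1, 2, 3]` have just surfaced a misconception about assignment that you can address on the spot.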

Setting Up Tables

You may not have any control over the layout of the desks or tables in the room in which you teach, but if you do, it’s best to have flat (dinner-style) seating rather than banked (theater-style) seating so that you can reach learners who need help more easily and so that it’s easier for learners to pair with one another (Section 9.5). In-floor power outlets make life easier as well as safer, since you don’t have to run power cords across the floor, but they are still uncommon.

Whatever layout you have, try to make sure that every seat has an unobstructed view of the screen. Good back support matters too, since people are going to be sitting for an extended period. Like in-floor power outlets, good classroom seating is still unfortunately uncommon.

Cough Drops

If you talk all day to a room full of people, your throat gets raw because you are irritating the epithelial cells in your larynx and pharynx. This doesn’t just make you hoarse—it also makes you more vulnerable to infection (which is part of the reason people often come down with colds after teaching).

The best way to protect yourself against this is to keep your throat lined, and the best way to do that is to use cough drops early and often. Good ones will also mask the onset of coffee breath, for which your learners will probably be grateful.

Think-Pair-Share

Think-pair-share is a lightweight technique that helps people improve ideas through discussion with their peers. Each person starts by thinking individually about a question or problem and jotting down a few notes. They then explain their ideas to each other in pairs, merging them or selecting the most promising. Finally, a few pairs present their ideas to the whole group.

Think-pair-share works because it forces people to externalize their cognition (Section 3.1). It also gives them a chance to spot and resolve gaps or contradictions in their ideas before exposing them to a larger group, which can make less extroverted learners a little less nervous about appearing foolish.

Morning, Noon, and Night

[Smar2018] found that learners do less well if their classes and other work are scheduled at times that don’t line up with their natural body clocks, i.e. that if a morning person takes night classes or vice versa, their grades suffer. It’s usually not possible to accommodate this in small groups, but larger ones should try to stagger start times for parallel sessions. This can also help people juggling childcare responsibilities and other constraints, and reduce the length of lineups at coffee breaks and for washrooms.

Humor

Humor should be used sparingly when teaching: most jokes are less funny when written down and become even less funny with each re-reading. Being spontaneously funny while teaching usually works better but can easily go wrong: what’s a joke to your circle of friends may turn out to be a serious political issue to your audience. If you do make jokes when teaching, don’t make them at the expense of any group, or of any individual except possibly yourself.

Limit Innovation

Each of the techniques presented in this chapter will make your classes better, but you shouldn’t try to adopt them all at once. The reason is that every new practice increases your cognitive load as well as your learners’, since you are all now trying to learn a new way to learn as well as the lesson’s subject matter. If you are working with a group repeatedly, you can introduce one new technique every few lessons; if you only have them for a one-day workshop, it’s best to pick just one method they haven’t seen before and get them comfortable with that.

Exercises

Create a Questionnaire (individual/20)

Using the questionnaire in Section 21.5 as a template, create a short questionnaire you could give learners before teaching a class of your own. What do you most want to know about their background, and how can both parties be sure they agree on what level of understanding you’re asking about?

One of Your Own (whole class/15)

Think of one teaching practice that hasn’t been described so far. Present your idea to a partner, listen to theirs, and select one to present to the group as a whole. (This exercise is an example of think-pair-share.)

May I Drive? (pairs/10)

Swap computers with a partner (preferably one who uses a different operating system than you) and work through a simple programming exercise. How frustrating is it? How much insight does it give you into what novices have to go through all the time?

Pairing (pairs/15)

Watch this video of pair programming and then practice doing it with a partner. Remember to switch roles between driver and navigator every few minutes. How long does it take you to fall into a working rhythm?

Compare Notes (small groups/15)

Form groups of 3–4 people and compare the notes you have taken on this chapter. What did you think was noteworthy that your peers missed and vice versa? What did you understand differently?

Credibility (individual/15)

[Fink2013] describes three things that make teachers credible in their learners’ eyes:

Competence:

knowledge of the subject as shown by the ability to explain complex ideas or reference the work of others.

Trustworthiness:

having the learners’ best interests in mind. This can be shown by giving individualized feedback, offering a rational explanation for grading decisions, and treating all learners the same.

Dynamism:

excitement about the subject (Chapter 8).

Describe one thing you do when teaching that fits into each category, and then describe one thing you don’t do but should.

Measuring Effectiveness (individual/15)

[Kirk1994] defines four levels at which to evaluate training:

Reaction:

how did the learners feel about the training?

Learning:

how much did they actually learn?

Behavior:

how much have they changed their behavior as a result?

Results:

how have those changes in behavior affected their output or the output of their group?

What are you doing at each level to evaluate what and how you teach? What could you do that you’re not doing?

Objections and Counter-Objections (think-pair-share/15)

You have decided not to ask your learners if your class was useful because you know there is no correlation between their answers and how much they actually learn (Section 7.1). Instead, you have put forward four proposals, each of which your colleagues have shot down:

See if they recommend the class to friends.

Why would this be any more meaningful than asking them how they feel about the class?

Give them an exam at the end.

But how much learners know at the end of the day is a poor predictor of how much they will remember two or three months later, and any kind of final exam will make the class a lot more stressful.

Give them an exam two or three months later.

That’s practically impossible with free-range learners, and the people who didn’t get anything out of the workshop are probably less likely to take part in follow-up, so feedback gathered this way will be skewed.

See if they keep using what they learned.

Installing spyware on learners’ computers is frowned upon, so how will this be implemented?

Working on your own, come up with answers to these objections, then share your responses with a partner and discuss the approaches you have come up with. When you are done, share your favored approach with the class.

Motivation and Demotivation

Learners need encouragement to step out into unfamiliar terrain, so this chapter discusses ways teachers can motivate them. More importantly, it talks about how teachers can demotivate them and how to avoid doing that.

Our starting point is the difference between extrinsic motivation, which we feel when we do something to avoid punishment or earn a reward, and intrinsic motivation, which is what we feel when we find something personally fulfilling. Both affect most situations—for example, people teach because they enjoy it and because they get paid—but we learn best when we are intrinsically motivated [Wlod2017]. According to self-determination theory, the three drivers of intrinsic motivation are:

Competence:

the feeling that you know what you’re doing.

Autonomy:

the feeling of being in control of your own destiny.

Relatedness:

the feeling of being connected to others.

A well-designed lesson encourages all three. For example, a programming exercise can let learners practice the tools they need to use to solve a larger problem (competence), let them tackle the parts of that problem in whatever order they want (autonomy), and allow them to talk to their peers (relatedness).

The Problem of Grades

I’ve never had an audience in my life. My audience is a rubric.
– quoted by Matt Tierney

Grades and the way they distort learning are often used as an example of extrinsic motivation, but as [Mill2016a] observes, they aren’t going to go away any time soon, so it’s pointless to try to build a system that ignores them. Instead, [Lang2013] explores how courses that emphasize grades can incentivize learners to cheat and offers some tips on how to diminish this effect, while [Covi2017] looks at the larger problem of balancing intrinsic and extrinsic motivation in institutional education, and the constructive alignment approach advocated in [Bigg2011] seeks to bring learning activities and learning outcomes into line with each other.

[Ambr2010] contains a list of evidence-based methods to motivate learners. None of them are surprising—it’s hard to imagine someone saying that we shouldn’t identify and reward what we value—but it’s useful to check lessons to make sure they are doing at least a few of these things. One strategy I particularly like is to have learners who struggled but succeeded come in and tell their stories to the rest of the class. Learners are far more likely to believe stories from people like themselves [Mill2016a], and people who have been through your course will always have advice you would never have thought of.

Not Just for Learners

Discussions of motivation in education often overlook the need to motivate the teacher. Learners respond to a teacher’s enthusiasm, and teachers (particularly volunteers) need to care about a topic in order to keep teaching it. This is another powerful reason to co-teach (Section 9.3): just as having a running partner makes it more likely that you’ll keep running, having a teaching partner helps get you up and going on those days when you have a cold and the projector bulb has burned out and nobody knows where to find a replacement and seriously, are they doing construction again?

Teachers can do other positive things as well. [Bark2014] found three things that drove retention for all learners: meaningful assignments, faculty interaction with learners, and learner collaboration on assignments. Pace and workload relative to expectations were also significant drivers, but primarily for male learners. Things that didn’t drive retention were interactions with teaching assistants and interactions with peers in extracurricular activities. These results seem obvious, but the reverse would seem obvious too: if the study had found that extracurricular activities did drive retention, we would also think that made sense. Notably, two of the four retention drivers (faculty interaction and learner collaboration) take extra effort to replicate online (Chapter 11).

Authentic Tasks

As Dylan Wiliam points out in [Hend2017], motivation doesn’t always lead to achievement, but achievement almost always leads to motivation: learners’ success motivates them far more than being told how wonderful they are. We can use this idea in teaching by creating a grid whose axes are “mean time to master” and “usefulness once mastered” (Figure [f:motivation-what]).

What to teach

Things that are quick to master and immediately useful should be taught first, even if they aren’t considered fundamental by people who are already competent practitioners, because a few early wins will build learners’ confidence in themselves and their teacher. Conversely, things that are hard to learn and aren’t useful to your learners at their current stage of development should be skipped entirely, while topics along the diagonal need to be weighed against each other.

Useful to Whom?

If someone wants to build websites, foundational computer science concepts like recursion and computability may inhabit the lower right corner of this grid. That doesn’t mean they aren’t worth learning, but if our aim is to motivate people, they can and should be deferred. Conversely, a senior who is taking a programming class to stimulate their mind may prefer exploring these big ideas to doing anything practical. When you are making up your grid, you should do it with your learner personas in mind (Section 6.1). If topics wind up in very different places for different personas, you should think about creating different courses.

A well-studied instance of prioritizing what’s useful without sacrificing what’s fundamental is the media computation approach developed at Georgia Tech [Guzd2013]. Instead of printing “hello world” or summing the first ten integers, a learner’s first program might open an image, resize it to create a thumbnail, and save the result. This is an authentic task, i.e. something that learners believe they would actually do in real life. It also has a tangible artifact: if the image comes out the wrong size, learners have something in hand that can guide their debugging. [Lee2013] describes an adaptation of this approach from Python to MATLAB, while others are building similar courses around data science, image processing, and biology [Dahl2018,Meys2018,Ritz2018].
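A first exercise in this style might look like the following sketch. The Pillow library and the filenames are assumptions for illustration, not part of the Georgia Tech materials:

```python
# A hypothetical media-computation first exercise: open an image,
# shrink it to a thumbnail, and save the result.
# Assumes the Pillow library (pip install pillow); filenames are invented.
from PIL import Image

# Create a sample 800x600 image so the example is self-contained.
Image.new("RGB", (800, 600), "steelblue").save("photo.jpg")

def make_thumbnail(src, dst, size=(128, 128)):
    """Save a thumbnail of src to dst, preserving aspect ratio."""
    with Image.open(src) as img:
        img.thumbnail(size)   # resizes in place to fit within size
        img.save(dst)

make_thumbnail("photo.jpg", "photo-thumb.jpg")
```

If the thumbnail comes out the wrong size, the learner has a concrete artifact to inspect, which is exactly what makes the task a good debugging exercise.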

There will always be tension between giving learners authentic problems and exercising the individual skills they need to solve those problems: after all, programmers don’t answer multiple choice questions on the job any more than musicians play scales over and over in front of an audience. Finding the balance is hard, but a first step is to take out anything arbitrary or meaningless. For example, programming examples shouldn’t use variables called foo and bar, and if you’re going to have learners sort a list, make it a list of songs rather than strings like “aaa” and “bbb”.
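The last point can be made concrete with a minimal sketch; the song titles and lengths below are invented for illustration:

```python
# Instead of sorting meaningless strings like "aaa" and "bbb",
# have learners sort something they recognize. Titles are made up.
songs = [
    ("Heavy Rotation", 211),    # (title, length in seconds)
    ("Afternoon Light", 184),
    ("Borrowed Time", 305),
]

# Sort by length, shortest first; learners can check the result by eye.
by_length = sorted(songs, key=lambda song: song[1])
print([title for title, _ in by_length])
```

The exercise practices exactly the same skill (sorting with a key), but the data gives learners a way to sanity-check their own output.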

Demotivation

Women aren’t leaving computing because they don’t know what it’s like; they’re leaving because they do know.
— variously attributed

If you are teaching in a free-range setting, your learners are probably volunteers, and probably want to be in your classroom. Motivating them is therefore less of a concern than not demotivating them. Unfortunately, you can easily demotivate people by accident. For example, [Cher2009] reported four studies showing that subtle environmental cues have a measurable effect on the interest that people of different genders have in computing: changing objects in a Computer Science classroom from those considered stereotypical of computer science (e.g. Star Trek posters and video games) to objects not considered stereotypical (e.g. nature posters and phone books) boosted female undergraduates’ interest to the level of their male peers. Similarly, [Gauc2011] reports a trio of studies showing that gendered wording commonly employed in job recruitment materials can maintain gender inequality in traditionally male-dominated occupations.

There are three main demotivators for adult learners:

Unpredictability

demotivates people because if there’s no reliable connection between what they do and what outcome they achieve, there’s no reason for them to try to do anything.

Indifference

demotivates because learners who believe that the teacher or educational system doesn’t care about them or the material won’t care about it either.

Unfairness

demotivates people who are disadvantaged for obvious reasons. What’s surprising is that it also demotivates people who benefit from unfairness: consciously or unconsciously, they worry that they will some day find themselves in the disadvantaged group [Wilk2011].

In extreme situations, learners may develop learned helplessness: when repeatedly subjected to negative feedback in a situation that they can’t change, they may learn not to even try to change the things they could.

One of the fastest and surest ways to demotivate learners is to use language that suggests that some people are natural programmers and others aren’t. Guzdial has called this the biggest myth about teaching computer science, and [Pati2016] backed this up by showing that people see evidence for a “geek gene” where none exists. They analyzed grade distributions from 778 university courses and found that only 5.8% showed signs of being multimodal, i.e. only one class in twenty showed signs of having two distinct populations of learners. They then showed 53 Computer Science professors histograms of ambiguous grade distributions; those who believed that some people are innately predisposed to be better at Computer Science were more likely to see them as bimodal than those who didn’t.

These beliefs matter because teachers act on them [Brop1983]. If a teacher believes that a learner is likely to do well they naturally (often unconsciously) focus on that learner, who then fulfills the teacher’s expectations because of the increased attention, which in turn appears to confirm the teacher’s belief. Sadly, there is little sign that mere evidence of the kind presented in [Pati2016] is enough to break this vicious cycle.

Here are a few other specific things that will demotivate your learners:

A holier-than-thou or contemptuous attitude

from a teacher or a fellow learner.

Telling them that their existing skills are rubbish.

Unix users sneer at Windows, programmers of all kinds make jokes about Excel, and no matter what web application framework you already know, some programmer will tell you that it’s out of date. Learners have often invested a lot of time and effort into acquiring the skills they have; disparaging them is a good way to guarantee that they won’t listen to anything else you have to say.

Diving into complex or detailed technical discussion

with the most advanced learners in the class.

Pretending that you know more than you do.

Learners will trust you more if you are frank about the limitations of your knowledge, and will be more likely to ask questions and seek help.

Using the J word (“just”) or feigning surprise.

As discussed in Chapter 3, saying things like “I can’t believe you don’t know X” or “you’ve never heard of Y?” signals to the learner that the teacher thinks their problem is trivial and that they must be stupid for not being able to figure it out.

Software installation headaches.

People’s first contact with programming or with new programming tools is often demoralizing, and believing that something is hard to learn is a self-fulfilling prophecy. It isn’t just the time it takes to get set up or the feeling that it’s unfair to have to debug something that depends on precisely the knowledge they don’t yet have. The real problem is that every such failure reinforces their belief that they would have a better chance of making next Thursday’s deadline if they kept doing things the way they always have.

It is even easier to demotivate people online than in person, but there are now evidence-based strategies for dealing with this. [Ford2016] found that five barriers to contribution on Stack Overflow are seen as significantly more problematic by women than men: lack of awareness of site features, feeling unqualified to answer questions, intimidating community size, discomfort interacting with or relying on strangers, and the feeling that searching for things online wasn’t “real work.” Fear of negative feedback didn’t quite make this list, but would have been the next one added if the authors weren’t quite so strict about their statistical cutoffs. All of these factors can and should be addressed in both in-person and online settings using methods like those in Section 10.4, and doing so improves outcomes for everyone [Sved2016].

Productive Failure and Privilege

Some recent work has explored productive failure, where learners are deliberately given problems that can’t be solved with the knowledge they have and have to go out and acquire new information in order to make progress [Kapu2016]. Productive failure is superficially reminiscent of tech’s “fail fast, fail often” mantra, but the latter is more a sign of privilege than of understanding. People can only afford to celebrate failure if they’re sure they’ll get a chance to try again; many of your learners, and many people from marginalized or underprivileged groups, can’t be sure of that, and assuming that failure is an option is a great way to demotivate them.

Impostor Syndrome

Impostor syndrome is the belief that your achievements are lucky flukes and an accompanying fear that someone will finally figure this out. It is common among high achievers who undertake publicly visible work, but disproportionately affects members of underrepresented groups: as discussed in Section 7.1, [Wilc2018] found that female learners with prior exposure to computing outperformed their male peers in all areas in introductory programming courses but were consistently less confident in their abilities, in part because society keeps signaling in subtle and not-so-subtle ways that they don’t really belong.

Traditional classrooms can fuel impostor syndrome. Schoolwork is frequently undertaken alone or in small groups, but the results are shared and criticized publicly. As a result, we rarely see how others struggle to finish their work, which can feed the belief that everyone else finds this easy. Members of underrepresented groups who already feel additional pressure to prove themselves may be particularly affected.

The Ada Initiative has created some guidelines for fighting your own impostor syndrome, which include:

Talk about the issue with people you trust.

When you hear from others that impostor syndrome is a common problem, it becomes harder to believe your feelings of being a fraud are real.

Go to an in-person impostor syndrome session.

There’s nothing like being in a room full of people you respect and discovering that 90% of them have impostor syndrome.

Watch your words, because they influence how you think.

Saying things like, “I’m not an expert in this, but” detracts from the knowledge you actually possess.

Teach others about your field.

You will gain confidence in your own knowledge and skill and help others avoid some impostor syndrome shoals.

Ask questions.

Asking questions can be intimidating if you think you should know the answer, but getting answers eliminates the extended agony of uncertainty and fear of failure.

Build alliances.

Reassure and build up your friends, who will reassure and build you up in return. (If they don’t, you might want to think about finding new friends.)

Own your accomplishments.

Keep actively recording and reviewing what you have done, what you have built, and what successes you’ve had.

As a teacher, you can help people with their impostor syndrome by sharing stories of mistakes that you have made or things you struggled to learn. This reassures the class that it’s OK to find topics hard. Being open with the group also builds trust and gives them confidence to ask questions. (Live coding is great for this: as noted in Section 8.1, your typos show your class that you’re human.) Frequent formative assessments help as well, particularly if learners see you adjusting what you teach or how quickly you go based on their outcomes.

Mindset and Stereotype Threat

Carol Dweck and others have studied the effects of fixed mindset and growth mindset on learning outcomes. If people believe that competence in some area is intrinsic (i.e. that you either “have the gene” for it or you don’t), everyone does worse, including the supposedly advantaged. The reason is that if someone doesn’t do well at first, they assume that they lack that aptitude, which biases their future performance. On the other hand, if people believe that a skill is learned and can be improved, they do better on average.

There are concerns that growth mindset has been oversold, or that it is much more difficult to translate research about it into practice than its more enthusiastic advocates have implied [Sisk2018]. However, it does appear that learners with low socioeconomic status or who are academically at risk might benefit from mindset interventions.

Another widely discussed effect is stereotype threat [Stee2011]. Reminding people of negative stereotypes, even in subtle ways, can make them anxious about the risk of confirming those stereotypes, which in turn can reduce their performance. Again, there are some concerns about the replicability of key studies, and the issue is further clouded by the fact that the term has been used in many ways [Shap2007], but no one would argue that mentioning stereotypes in class will help learners.

Accessibility

Putting lessons and exercises out of someone’s reach is about as demotivating as it gets, and it’s very easy to do this inadvertently. For example, the first online programming lessons I wrote had a transcript of the narration beside the slides, but didn’t include the actual source code: that was in screenshots of PowerPoint slides. Someone using a screen reader could therefore hear what was being said about the program, but wouldn’t know what the program actually was. It isn’t always feasible to accommodate every learner’s needs, but adding descriptive captions to images and making navigation controls accessible to people who can’t use a mouse can make a big difference.

Curb Cuts

Making material accessible helps everyone, not just people facing challenges. Curb cuts—the small sloped ramps joining a sidewalk to the street—were originally created to make it easier for the physically disabled to move around, but proved to be equally helpful to people with strollers and grocery carts. Similarly, captioning images doesn’t just help the visually impaired: it also makes images easier for search engines to find and index.

The first and most important step in making lessons accessible is to involve people with disabilities in decision making: the slogan nihil de nobis, sine nobis (literally, “nothing about us without us”) predates accessibility rights, but is always the right place to start. A few specific recommendations are:

Find out what you need to do.

Each of these posters offers do’s and don’ts for people on the autistic spectrum, users of screen readers, and people with low vision, physical or motor disabilities, hearing impairments, and dyslexia.

Don’t do everything at once.

The enhancements described in the previous point can seem pretty daunting, so make one change at a time.

Do the easy things first.

Font size, using a clip-on microphone so that people can hear you more easily, and checking your color choices are good places to start.

Know how well you’re doing.

Sites like WebAIM allow you to check how accessible your online materials are to visually impaired users.

[Coom2012,Burg2015] are good guides to visual design for accessibility. Their recommendations include:

Format documents with actual headings and other landmarks

rather than just changing font sizes and styles.

Avoid using color alone to convey meaning in text or graphics.

Instead, use color plus different cross-hatching patterns (which also makes material understandable when printed in black and white).

Remove unnecessary elements

rather than just making them invisible, because screen readers will still often say them aloud.

Allow self-pacing and repetition

for people with reading or hearing issues.

Include narration of on-screen action in videos

(and talk while you type when live coding).

Spoons

In 2003, Christine Miserandino started using spoons as a way to explain what it’s like to live with chronic illness. Healthy people start each day with an unlimited supply of spoons, but people with lupus or other debilitating conditions only have a few, and everything they do costs them one. Getting out of bed? That’s a spoon. Making a meal? That’s another spoon, and pretty soon, you’ve run out.

You cannot simply just throw clothes on when you are sick. If my hands hurt that day, buttons are out of the question. If I have bruises that day, I need to wear long sleeves, and if I have a fever I need a sweater to stay warm and so on. If my hair is falling out I need to spend more time to look presentable, and then you need to factor in another 5 minutes for feeling badly that it took you 2 hours to do all this.

As Elizabeth Patitsas has argued, people who have a lot of spoons can accumulate more, but people whose supply is limited may struggle to get ahead. When designing classes and exercises, remember that some of your learners may have physical or mental obstacles that aren’t obvious. When in doubt, ask: they almost certainly have more experience with what works and what doesn’t than anyone else.

Inclusivity

Inclusivity is a policy of including people who might otherwise be excluded or marginalized. In computing, it means making a positive effort to be more welcoming to women, underrepresented racial or ethnic groups, people with various sexual orientations, the elderly, those facing physical challenges, the formerly incarcerated, the economically disadvantaged, and everyone else who doesn’t fit Silicon Valley’s affluent white/Asian male demographic. Figure [f:motivation-women-in-cs] (from NPR) graphically illustrates the effects of computing’s exclusionary culture on women.

Female computer science majors in the US

[Lee2017] is a brief, practical guide to doing that with references to the research literature. The practices it describes help learners who belong to one or more marginalized or excluded groups, but help motivate everyone else as well. They are phrased in terms of term-long courses, but many can be applied in workshops and other free-range settings:

Ask learners to email you before the workshop

to explain how they believe the training could help them achieve their goals.

Review your notes

to make sure they are free from gendered pronouns, include culturally diverse names, etc.

Emphasize that what matters is the rate at which they are learning,

not the advantages or disadvantages they had when they started.

Encourage pair programming,

but demonstrate it first so that learners understand the roles of driver and navigator.

Actively mitigate behavior that some learners may find intimidating,

e.g. use of jargon or “questions” that are actually asked to display knowledge.

One way to support learners from marginalized groups is to have people sign up for workshops in groups rather than individually. That way, everyone in the room knows in advance that they will be with people they trust, which increases the chances of them actually coming. It also helps after the workshop: if people come with their friends or colleagues, they can work together to use what they’ve learned.

More fundamentally, lesson authors need to take everyone’s entire situation into account. For example, [DiSa2014a] found that 65% of male African-American participants in a game testing program went on to study computing, in part because the gaming aspect of the program was something their peers respected. [Lach2018] explored two general strategies for creating inclusive content and the risks associated with them:

Community representation

highlights learners’ social identities, histories, and community networks using after-school mentors or role models from learners’ neighborhoods, or activities that use community narratives and histories as a foundation for a computing project. The major risk with this approach is shallowness, e.g. using computers to build slideshows rather than do any real computing.

Computational integration

incorporates ideas from the learner’s community, such as reproducing indigenous graphic designs in a visual programming environment. The major risk here is cultural appropriation, e.g. using practices without acknowledging origins.

If in doubt, ask your learners and members of the community what they think you ought to do. We return to this in Chapter 13.

Conduct as Accessibility

We said in Section 9.1 that classes should enforce a Code of Conduct like the one in Appendix 17. This is a form of accessibility: while closed captions make video accessible to people with hearing disabilities, a Code of Conduct makes lessons accessible to people who would otherwise be marginalized.

Moving Past the Deficit Model

Depending on whose numbers you trust, only 12–18% of people getting computer science degrees are women, which is less than half the percentage seen in the mid-1980s (Figure [f:motivation-gender], from [Robe2017]). And western countries are the odd ones for having such a low percentage of women in computing: women are still often 30–40% of computer science students elsewhere [Galp2002,Varm2015].

Degrees awarded and female enrollment

Since it’s unlikely that women have changed drastically in the last 30 years, we have to look for structural causes to understand what’s gone wrong and how to fix it. One explanation is the way that home computers were marketed as “boys’ toys” starting in the 1980s [Marg2003]; another is the way that computer science departments responded to explosive growth in enrollment in the 1980s and again in the 2000s by changing admission requirements [Robe2017]. None of these factors may seem dramatic to people who aren’t affected by them, but they act like the steady drip of water on a stone: over time, they erode motivation, and with it, participation.

The first and most important step toward fixing this is to stop thinking in terms of a “leaky pipeline” [Mill2015]. More generally, we need to move past a deficit model, i.e. to stop thinking that the members of underrepresented groups lack something and are therefore responsible for not getting ahead. Believing that puts the burden on people who already have to do extra work to overcome structural inequities and (not coincidentally) gives those who benefit from the current arrangements an excuse not to look at themselves too closely.

Rewriting History

[Abba2012] describes the careers and accomplishments of the women who shaped the early history of computing, but have all too often been written out of it; [Ensm2003,Ensm2012] describe how programming was turned from a female into a male profession in the 1960s, while [Hick2018] looks at how Britain lost its early dominance in computing by systematically discriminating against its most qualified workers: women. (See [Milt2018] for a review of all three books.) Discussing this history makes some men in computing very uncomfortable; in my opinion, that’s a good reason to do it.

Misogyny in video games, the use of “cultural fit” in hiring to excuse conscious or unconscious bias, a culture of silence around harassment, and the growing inequality in society that produces preparatory privilege (Section 9.5) are not any one person’s fault, but fixing them is everyone’s responsibility. As a teacher, you have more power than most; this workshop has excellent practical advice on how to be a good ally, and its advice is probably more important than anything this book teaches you about teaching.

Exercises

Authentic Tasks (pairs/15)

  1. In pairs, list half a dozen things you did this week that use the skills you teach.

  2. Place your items on a 2x2 grid of “time to master” and “usefulness”. Where do you agree and disagree?

Core Needs (whole class/10)

Paloma Medina identifies six core needs for people at work: belonging, improvement (i.e. making progress), choice, equality, predictability, and significance. After reading her description of these, order them from most to least significant for you personally, then compare rankings with your peers. How do you think your rankings compare with those of your learners?

Implement One Strategy for Inclusivity (individual/5)

Pick one activity or change in practice from [Lee2017] that you would like to work on. Put a reminder in your calendar three months in the future to ask yourself whether you have done something about it.

After the Fact (think-pair-share/20)

  1. Think back to a course that you took in the past and identify one thing the teacher did that demotivated you. Make notes about what could have been done afterward to correct the situation.

  2. Pair up with your neighbor and compare stories, then add your comments to a set of notes shared by the whole class.

  3. Review the comments in the shared notes as a group. Highlight and discuss a few of the things that could have been done differently.

  4. Do you think that doing this will help you handle situations like these in the future?

Walk the Route (whole class/15)

Find the nearest public transportation drop-off point to your building and walk from there to your office and then to the nearest washroom, making notes about things you think would be difficult for someone with mobility issues. Now borrow a wheelchair and repeat the journey. How complete was your list of obstacles? And did you notice that the first sentence in this exercise assumed you could actually walk?

Who Decides? (whole class/15)

In [Litt2004], Kenneth Wesson wrote, “If poor inner-city children consistently outscored children from wealthy suburban homes on standardized tests, is anyone naive enough to believe that we would still insist on using these tests as indicators of success?” Read this article by Cameron Cottrill, and then describe an example from your own experience of “objective” assessments that reinforced the status quo.

Common Stereotypes (pairs/10)

Some people still say, “It’s so simple that even your grandmother could use it.” In pairs, list two or three other phrases that reinforce stereotypes about computing.

Not Being a Jerk (individual/15)

This short article by Gary Bernhardt rewrites an unnecessarily hostile message to be less rude. Using it as a model, find something unpleasant on Stack Overflow or some other public discussion forum and rewrite it to be more inclusive.

Saving Face (individual/10)

Would any of your hoped-for learners be embarrassed to admit that they don’t already know some of the things you want to teach? If so, how can you help them save face?

Childhood Toys (whole class/15)

[Cutt2017] surveyed adult computer users about their childhood activities and found that the strongest correlations with confidence in computer use were reading on one’s own and playing with construction toys like Lego that do not have moving parts. Survey the class and see what other activities people engaged in, then search for these activities online. How strongly gendered are descriptions and advertising for them? What effect do you think this has?

Lesson Accessibility (pairs/30)

In pairs, choose a lesson whose materials are available online and independently rank it according to the do’s and don’ts in these posters. Where did you and your partner agree? Where did you disagree? How well did the lesson do for each of the six categories of user?

Tracing the Cycle (small groups/15)

[Coco2018] traces a depressingly common pattern in which good intentions are undermined by an organization’s leadership being unwilling to actually change. Working in groups of 4–6, write brief texts or emails that you imagine each of the parties involved would send to the other at each stage in this cycle.

What’s the Worst Thing That Could Happen? (small groups/5)

Over the years, I have had a projector catch fire, a student go into labor, and a fight break out in class. I’ve fallen off stage twice, fallen asleep in one of my own lectures, and had many jokes fall flat. In small groups, make up a list of the worst things that have happened to you while you were teaching, then share with the class. Keep the list to remind yourself later that no matter how bad class was, at least none of that happened.

Review

Concepts: Motivation

Teaching Online

If you use robots to teach, you teach people to be robots.
— variously attributed

Technology has changed teaching and learning many times. Before blackboards were introduced into schools in the early 1800s, there was no way for teachers to share an improvised example, diagram, or exercise with an entire class at once. Cheap, reliable, easy to use, and flexible, blackboards enabled teachers to do things quickly and at a scale that they had only been able to do slowly and piecemeal before. Similarly, hand-held video cameras revolutionized athletics training, just as tape recorders revolutionized music instruction a decade earlier.

Many of the people pushing the internet into classrooms don’t know this history, and don’t realize that theirs is just the latest in a long series of attempts to use machines to teach [Watt2014]. From the printing press through radio and television to desktop computers and mobile devices, every new way to share knowledge has produced a wave of aggressive optimists who believe that education is broken and that technology can fix it. However, ed tech’s loudest advocates have often known less about “ed” than “tech,” and behind their rhetoric, many have been driven more by the prospect of profit than by the desire to empower learners.

Today’s debate is often muddied by confusing “online” with “automated.” Run well, a dozen people working through a problem in a video chat feels like any other small-group discussion. Conversely, a squad of teaching assistants grading hundreds of papers against an inflexible rubric might as well be a collection of Perl scripts. This chapter therefore starts by looking at fully automated online instruction using recorded videos and automatically graded exercises, then explores some alternative hybrid models.

MOOCs

The highest-profile effort to reinvent education using the internet is the Massive Open Online Course, or MOOC. The term was invented by David Cormier in 2008 to describe a course organized by George Siemens and Stephen Downes. That course was based on a connectivist view of learning, which holds that knowledge is distributed and that learning is the process of finding, creating, and pruning connections.

The term “MOOC” was quickly co-opted by creators of courses that more closely resembled the hub-and-spoke model of a traditional classroom, with the teacher at the center defining goals and the learners seen as recipients or replicators of knowledge. Classes that use the original connectivist model are now sometimes referred to as “cMOOCs,” while classes that centralize control are called “xMOOCs.” (The latter is also sometimes called a “MESS,” for Massively Enhanced Sage on the Stage.)

Five years ago, you couldn’t walk across a major university campus without hearing someone talking about how MOOCs would revolutionize education, destroy it, or possibly both. MOOCs would give learners access to a wider range of courses and allow them to work when it was convenient for them rather than fitting their learning to someone else’s schedule.

But MOOCs haven’t been nearly as effective as their more enthusiastic proponents predicted [Ubel2017]. One reason is that recorded content is ineffective for many novices because it cannot clear up their individual misconceptions (Chapter 2): if they don’t understand an explanation the first time around, there usually isn’t a different one on offer. Another is that the automated assessment needed to put the “massive” in MOOC only works well at the lowest levels of Bloom’s Taxonomy (Section 6.2). It’s also now clear that learners have to shoulder much more of the burden of staying focused in a MOOC, that the impersonality of working online can encourage uncivil behavior and demotivate people, and that “available to everyone” actually means “available to everyone affluent enough to have high-speed internet and lots of free time.”

[Marg2015] examined 76 MOOCs on various subjects and found that while the organization and presentation of material was good, the quality of lesson design was poor. Closer to home, [Kim2017] studied thirty popular online coding tutorials and found that they largely taught the same content the same way: bottom-up, starting with low-level programming concepts and building up to high-level goals. Most required learners to write programs and provided some form of immediate feedback, but this feedback was typically very shallow. Few explained when and why concepts are useful (i.e. they didn’t show how to transfer knowledge) or provided guidance for common errors, and other than rudimentary age-based differentiation, none personalized lessons based on prior coding experience or learner goals.

Personalized Learning

Few terms have been used and abused in as many ways as personalized learning. To most ed tech proponents, it means dynamically adjusting the pace of lessons based on learner performance, so that if someone answers several questions in a row correctly, the computer will skip some of the subsequent questions.

Doing this can produce modest improvements, but better is possible. For example, if many learners find a particular topic difficult, the teacher can prepare multiple alternative explanations of that point rather than accelerating a single path. That way, if one explanation doesn’t resonate, others are available. However, this requires a lot more design work on the teacher’s part, which may be why it hasn’t proven popular. And even if it does work, the effects are likely to be much less than some of its advocates believe. A good teacher makes a difference of 0.1–0.15 standard deviations in end-of-year performance in grade school [Chet2014] (see this article for a brief summary). It’s unrealistic to believe that any kind of automation can outdo this any time soon.

So how should the internet be used in teaching and learning tech skills? Its pros and cons are:

Learners can access more lessons, more quickly, than ever before.

Provided, of course, that a search engine considers those lessons worth indexing, that their internet service provider and government don’t block them, and that the truth isn’t drowned in a sea of attention-sapping disinformation.

Learners can access better lessons than ever before,

unless they are being steered toward second-rate material in order to redistribute wealth from the have-nots to the haves [McMi2017]. It’s also worth remembering that scarcity increases perceived value, so as online education becomes cheaper, it will increasingly become what everyone wants for someone else’s children.

Learners can access far more people than ever before as well.

But only if those learners actually have access to the required technology, can afford to use it, and aren’t driven offline by harassment or marginalized because they don’t conform to the social norms of whichever group is talking loudest. In practice, most MOOC users come from secure, affluent backgrounds [Hans2015].

Teachers can get far more detailed insight into how learners work.

So long as learners are doing things that are amenable to large-scale automated analysis and either don’t object to surveillance in the classroom or aren’t powerful enough for their objections to matter.

[Marg2015,Mill2016a,Nils2017] describe ways to accentuate the positives in the list above while avoiding the negatives:

Make deadlines frequent and well-publicized,

and enforce them so that learners will get into a work rhythm.

Keep synchronous all-class activities like live lectures to a minimum

so that people don’t miss things because of scheduling conflicts.

Have learners contribute to collective knowledge,

e.g. take notes together (Section 9.7), serve as classroom scribes, or contribute problems to shared problem sets (Section 5.3).

Encourage or require learners to do some of their work in small groups

that do have synchronous online activities such as a weekly online discussion. This helps learners stay engaged and motivated without creating too many scheduling headaches. (See Appendix 20 for some tips on how to make these discussions fair and productive.)

Create, publicize, and enforce a code of conduct

so that everyone can actually take part in online discussions (Section 9.1).

Use lots of short lesson episodes rather than a handful of lecture-length chunks

in order to minimize cognitive load and provide lots of opportunities for formative assessment. This also helps with maintenance: if all of your videos are short, you can simply re-record any that need maintenance, which is often cheaper than trying to patch longer ones.

Use video to engage rather than instruct.

Disabilities aside (Section 10.3), learners can read faster than you can talk. The exception to this rule is that video is actually the best way to teach people verbs (actions): short screencasts that show people how to use an editor, step through code in a debugger, and so on are more effective than screenshots with text.

Identify and clear up misconceptions early.

If data shows that learners are struggling with some parts of a lesson, create alternative explanations of those points and extra exercises for them to practice on.

All of this has to be implemented somehow, which means that you need some kind of teaching platform. You can either use an all-in-one learning management system like Moodle or Sakai, or assemble something yourself using Slack or Zulip for chat, Google Hangouts or appear.in for video conversations, and WordPress, Google Docs, or any number of wiki systems for collaborative authoring. If you are just starting out, pick whatever is easiest to set up and administer and is most familiar to your learners. If faced with a choice, the second consideration is more important than the first: you’re expecting people to learn a lot in your class, so it’s only fair for you to learn how to drive the tools they’re most comfortable with.

Assembling a platform for learning is necessary but not sufficient: if you want your learners to thrive, you need to create a community. Hundreds of books and presentations talk about how to do this, but most are based on their authors’ personal experiences. [Krau2016] is a welcome exception: while it predates the accelerating descent of Twitter and Facebook into weaponized abuse and misinformation, most of its findings are still relevant. [Foge2005] is also full of useful tips about the communities of practice that learners may hope to join; we explore some of its ideas in Chapter 13.

Freedom To and Freedom From

Isaiah Berlin’s 1958 essay “Two Concepts of Liberty” made a distinction between positive liberty, which is the ability to actually do something, and negative liberty, which is the absence of rules saying that you can’t do it. Online discussions usually offer negative liberty (nobody’s stopping you from saying what you think) but not positive liberty (many people can’t actually be heard). One way to address this is to introduce some kind of throttling, such as only allowing each learner to contribute one message per discussion thread per day. Doing this gives those with something to say a chance to say it, while clearing space for others to say things as well.
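The throttling rule just described can be sketched in a few lines. Everything in this sketch (the class name, its method, the in-memory set) is invented to illustrate the policy; it is not taken from any real forum platform:

```python
from datetime import date

# Minimal sketch of per-thread, per-day throttling: each learner may
# contribute at most one message per discussion thread per day.
class Throttle:
    def __init__(self):
        self._posted = set()   # (user, thread, day) triples already used

    def may_post(self, user, thread, day=None):
        day = day or date.today()
        key = (user, thread, day)
        if key in self._posted:
            return False       # already spoke in this thread today
        self._posted.add(key)
        return True

t = Throttle()
assert t.may_post("amira", "lesson-design")       # first message is allowed
assert not t.may_post("amira", "lesson-design")   # throttled for today
assert t.may_post("amira", "assessment")          # a different thread is fine
```

A real implementation would persist the record and reset it per calendar day, but even this sketch shows how little machinery the policy needs.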

One other concern people have about teaching online is cheating. Day-to-day dishonesty is no more common in online classes than in face-to-face settings [Beck2014], but the temptation to have someone else write the final exam, and the difficulty of checking whether this happened, is one of the reasons educational institutions have been reluctant to offer credit for pure online classes. Remote exam proctoring is possible, but before investing in this, read [Lang2013]: it explores why and how learners cheat, and how courses can be structured to avoid giving them a reason to do so.

Video

A prominent feature of most MOOCs is their use of recorded video lectures. These can be effective: as mentioned in Chapter 8, a teaching technique called Direct Instruction based on precise delivery of a well-designed script has repeatedly been shown to be effective [Stoc2018]. However, scripts for direct instruction have to be designed, tested, and refined very carefully, which is an investment that many MOOCs have been unwilling or unable to make. Making a small change to a web page or a slide deck only takes a few minutes; making even a small change to a short video takes an hour or more, so the cost to the teacher of acting on feedback can be unsupportable. And even when they’re well made, videos have to be combined with activities to be beneficial: [Koed2015] estimated “the learning benefit from extra doing to be more than six times that of extra watching or reading.”

If you are teaching programming, you may use screencasts instead of slides, since they offer some of the same advantages as live coding (Section 8.1). [Chen2009] offers useful tips for creating and critiquing screencasts and other videos; Figure [f:online-screencasting] (from [Chen2009]) reproduces the patterns that paper presents and the relationships between them. (It’s also a good example of a concept map (Section 3.1).)

Patterns for screencasting

So what makes an instructional video effective? [Guo2014] measured engagement by looking at how long learners watched MOOC videos, and found that:

  • Shorter videos are much more engaging—videos should be no more than six minutes long.

  • A talking head superimposed on slides is more engaging than voice over slides alone.

  • Videos that feel personal can be more engaging than high-quality studio recordings, so filming in informal settings may produce better results at lower cost than professional studio work.

  • Drawing on a tablet is more engaging than PowerPoint slides or code screencasts, though it’s not clear whether this is because of the motion and informality or because it reduces the amount of text on the screen.

  • It’s OK for teachers to speak fairly fast as long as they are enthusiastic.

One thing [Guo2014] didn’t address is the chicken-and-egg problem: do learners find a certain kind of video engaging because they’re used to it, so producing more videos of that kind will increase engagement simply because of a feedback loop? Or do these recommendations reflect some deeper cognitive processes? Another thing this paper didn’t look at is learning outcomes: we know that learner evaluations of courses don’t correlate with learning [Star2014,Uttl2017], and while it’s plausible that learners won’t learn from things they don’t watch, it remains to be proven that they do learn from things they do watch.

I’m a Little Uncomfortable

[Guo2014]’s research was approved by a university research ethics board, the learners whose viewing habits were monitored almost certainly clicked “agree” on a terms of service agreement at some point, and I’m glad to have these insights. On the other hand, the word “privacy” didn’t appear in the title or abstract of any of the dozens of papers or posters at the conference where these results were presented. Given a choice, I’d rather not know how engaged learners are than foster ubiquitous surveillance in the classroom.

There are many different ways to record video lessons; to find out which are most effective, [Mull2007a] assigned 364 first-year physics learners to online multimedia treatments of Newton’s First and Second Laws in one of four styles:

Exposition:

concise lecture-style presentation.

Extended Exposition:

as above with additional interesting information.

Refutation:

Exposition with common misconceptions explicitly stated and refuted.

Dialog:

Learner-tutor discussion of the same material as in the Refutation.

Refutation and Dialog produced the greatest learning gains compared to Exposition; learners with low prior knowledge benefited most, and those with high prior knowledge were not disadvantaged. Again, this highlights the importance of directly addressing learners’ misconceptions. Don’t just tell people what is: tell them what isn’t and why not.

Hybrid Models

Fully automated teaching is only one way to use the web in teaching. In practice, almost all learning in affluent societies has an online component today, either officially or through peer-to-peer back channels and surreptitious searches for answers to homework questions. Combining live and automated instruction allows teachers to use the strengths of both. In a traditional classroom, the teacher can answer questions immediately, but it takes days or weeks for learners to get feedback on their coding exercises. Online, it can take longer for a learner to get an answer, but they can get immediate feedback on their coding (at least for those kinds of exercises we can auto-grade).

Another difference is that online exercises have to be more detailed because they have to anticipate learners’ questions. I find that in-person lessons start with the intersection of what everyone needs to know and expand on demand, while online lessons have to include the union of what everyone needs to know because the teacher isn’t there to do the expanding.

In reality, the distinction between online and in-person is now less important for most people than the distinction between synchronous and asynchronous: do teachers and learners interact in real time, or is their communication spread out and interleaved with other activities? In-person will almost always be synchronous, but online is increasingly a mixture of both:

I think that our grandchildren will probably regard the distinction we make between what we call the real world and what they think of as simply the world as the quaintest and most incomprehensible thing about us.
— William Gibson

The most popular implementation of this blended future today is the flipped classroom, in which learners watch recorded lessons on their own and class time is used for discussion and working through problem sets. Originally described in [King1993], the idea was popularized as part of peer instruction (Section 9.2) and has been studied intensively over the past decade. For example, [Camp2016] compared learners who took an introductory computer science class online with those who took it in a flipped classroom. Completion of (unmarked) practice exercises correlated with exam scores for both, but the completion rate of rehearsal exercises by online learners was significantly lower than lecture attendance rates for in-person learners.

But if recordings are available, will learners still show up to class to do practice exercises? [Nord2017] examined the impact of recordings on both lecture attendance and learners’ performance at different levels. In most cases the study found no negative consequences of making recordings available; in particular, learners didn’t skip lectures when recordings were available (at least, not any more than they usually do). The benefits of providing recordings were greatest for learners early in their careers, but diminished as learners became more mature.

Another hybrid model brings online life into the classroom. Taking notes together is a first step (Section 9.7); pooling answers to multiple choice questions in real time using tools like Pear Deck and Socrative is another. If the class is small—say, a dozen to fifteen people—you can also have all of the learners join a video conference so that they can screenshare with the teacher. This allows them to show their work (or their problems) to the entire class without having to connect their laptop to the projector. Learners can also then use the chat in the video call to post questions for the teacher; in my experience, most of them will be answered by their fellow learners, and the teacher can handle the rest when they reach a natural break. This model helps level the playing field for remote learners: if someone isn’t able to attend class for health reasons or because of family or work commitments, they can still take part on a nearly-equal basis if everyone is used to collaborating online in real time.

I have also delivered classes using real-time remote instruction, in which learners are co-located at 2–6 sites with helpers present while I taught via streaming video (Section 18.1). This scales well, saves on travel costs, and allows the use of techniques like pair programming (Section 9.6). What doesn’t work is having one group in person and one or more groups remotely: with the best will in the world, the local participants get far more attention.

Online Engagement

[Nuth2007] found that there are three overlapping worlds in every classroom: the public (what the teacher is saying and doing), the social (peer-to-peer interactions between learners), and the private (inside each learner’s head). Of these, the most important is usually the social: learners pick up as much via cues from their peers as they do from formal instruction.

The key to making any form of online teaching effective is therefore to facilitate peer-to-peer interactions. To aid this, courses almost always have some kind of discussion forum. [Mill2016a] observed that learners use these in very different ways:

procrastinators are particularly unlikely to participate in online discussion forums, and this reduced participation, in turn, is correlated with worse grades. A possible explanation for this correlation is that procrastinators are especially hesitant to join in once the discussion is under way, perhaps because they worry about being perceived as newcomers in an established conversation. This aversion to jump in late causes them to miss out on the important learning and motivation benefits of peer-to-peer interaction.

[Vell2017] analyzed discussion forum posts from 395 CS2 students at two universities, dividing them into four categories:

Active:

requests for help that do not display reasoning or show what the student has already tried or already knows.

Constructive:

requests that reflect students’ reasoning or attempts to construct a solution to the problem.

Logistical:

course policies, schedules, assignment submission, etc.

Content clarification:

request for additional information that doesn’t reveal the student’s own thinking.

They found that constructive and logistical questions dominated, and that constructive questions correlated with grades. They also found that students rarely ask more than one active question in a course, and that these don’t correlate with grades. While this is disappointing, knowing it helps set teachers’ expectations: while we might all want our courses to have lively online communities, we have to accept that most won’t, or that most learner-to-learner discussion will take place through channels that they are already using that we may not be part of.

Co-opetition

[Gull2004] describes an online coding contest that combines collaboration and competition. The contest starts when a problem description is posted along with a correct but inefficient solution. When it ends, the winner is the person who has made the greatest overall contribution to improving the performance of the overall solution. All submissions are in the open, so that participants can see one another’s work and borrow ideas from each other. As the paper shows, the final solution is almost always a hybrid borrowing ideas from many people.

[Batt2018] described a small-scale variation of this in an introductory computing class. In stage one, each learner submitted a programming project individually. In stage two, learners were paired to create an improved solution to the same problem. The assessment indicates that two-stage projects tend to improve learners’ understanding and that they enjoyed the process. Projects like these not only improve engagement, they also give participants more experience building on someone else’s code.

Discussion isn’t the only way to get learners to work together online. [Pare2008] and [Kulk2013] report experiments in which learners grade each other’s work, and the grades they assign are then compared with the grades given by graduate-level teaching assistants or other experts. Both found that learner-assigned grades agreed with expert-assigned grades as often as the experts’ grades agreed with each other, and that a few simple steps (such as filtering out obviously unconsidered responses or structuring rubrics) decreased disagreement even further. And as discussed in Section 5.3, collusion and bias are not significant factors in peer grading.

Trust, but Educate

The most common way to measure the validity of feedback is to compare learners’ grades to experts’ grades, but calibrated peer review (Section 5.3) can be equally effective. Before grading each other’s work, learners are asked to grade samples and compare their results with the grades assigned by the teacher. Once the two align, the learner is allowed to start giving grades to peers. Given that critical reading is an effective way to learn, this result may point to a future in which learners use technology to make judgments, rather than being judged by technology.
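The calibration step can be sketched in a few lines. The alignment rule here (every sample score within one point of the teacher's) is an invented threshold for illustration; real systems use their own criteria:

```python
# Hypothetical sketch of calibrated peer review: a learner grades sample
# submissions and is cleared to grade peers once their scores track the
# teacher's within a tolerance.
def is_calibrated(learner_scores, teacher_scores, tolerance=1.0):
    assert len(learner_scores) == len(teacher_scores)
    gaps = [abs(a - b) for a, b in zip(learner_scores, teacher_scores)]
    return max(gaps) <= tolerance

assert is_calibrated([8, 6, 9], [8, 7, 9])       # within one point everywhere
assert not is_calibrated([10, 3, 9], [8, 7, 9])  # too far off on two samples
```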

One technique we will definitely see more of in coming years is online streaming of live coding sessions [Raj2018,Haar2017]. This has most of the benefits discussed in Section 8.1, and when combined with collaborative note-taking (Section 9.7) it can be a close approximation to an in-class experience.

Looking even further ahead, [Ijss2000] identified four levels of online presence, from realism (we can’t tell the difference) through immersion (we forget the difference) and involvement (we’re engaged but aware of the difference) to suspension of disbelief (we are doing most of the work). Crucially, they distinguish physical presence, which is the sense of actually being somewhere, and social presence, which is the sense of being with others. The latter is more important in most learning situations, and again, we can foster it by using learners’ everyday technology in the classroom. For example, [Deb2018] found that real-time feedback on in-class exercises using learners’ own mobile devices improved concept retention and learner engagement while reducing failure rates.

Online and asynchronous teaching are both still in their infancy. Centralized MOOCs may prove to be an evolutionary dead end, but there are still many other promising models to explore. In particular, [Broo2016] describes fifty ways that groups can discuss things productively, only a handful of which are widely known or implemented online. If we go where our learners are technologically rather than requiring them to come to us, we may wind up learning as much as they do.

Exercises

Two-Way Video (pairs/10)

Record a 2–3 minute video of yourself doing something, then swap machines with a partner so that each of you can watch the other’s video at 4x speed. How easy is it to follow what’s going on? What, if anything, did you miss?

Viewpoints (individual/10)

According to [Irib2009], different disciplines focus on different factors affecting the success or otherwise of online communities:

Business:

customer loyalty, brand management, extrinsic motivation.

Psychology:

sense of community, intrinsic motivation.

Sociology:

group identity, physical community, social capital, collective action.

Computer Science:

technological implementation.

Which of these perspectives most closely corresponds to your own? Which are you least aligned with?

Helping or Harming (small groups/30)

Susan Dynarski’s article in the New York Times explains how and why schools are putting students who fail in-person courses into online courses, and how this sets them up for even further failure. Read the article and then:

  1. In small groups, come up with 2–3 things that schools could do to compensate for these negative effects and create rough estimates of their per-learner costs.

  2. Compare your suggestions and costs with those of other groups. How many full-time teaching positions do you think would have to be cut in order to free up resources to implement the most popular ideas for a hundred learners?

  3. As a class, do you think that would be a net benefit for the learners or not?

Budgeting exercises like this are a good way to tell who’s serious about educational change. Everyone can think of things they’d like to do; far fewer are willing to talk about the tradeoffs needed to make change happen.

Exercise Types

Every good carpenter has a set of screwdrivers, and every good teacher has different kinds of exercises to check what learners are actually learning, help them practice their new skills, and keep them engaged. This chapter starts by describing several kinds of exercises you can use to check if your teaching has been effective. It then looks at the state of the art in automated grading, and closes by exploring discussion, projects, and other important kinds of work that require more human attention to assess. Our discussion draws in part on the Canterbury Question Bank [Sand2013], which has entries for various languages and topics in introductory computing.

The Classics

As Section 2.1 discussed, multiple choice questions (MCQs) are most effective when the wrong answers probe for specific misconceptions. They are usually designed to test the lower levels of Bloom’s Taxonomy (Section 6.2), but can also require learners to exercise judgment.

A Multiple Choice Question

In what order do operations occur when the computer evaluates the expression price = addTaxes(cost - discount)?

  1. subtraction, function call, assignment

  2. function call, subtraction, assignment

  3. function call, then assignment and subtraction simultaneously

  4. none of the above
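The order in the correct answer can be demonstrated empirically. This sketch instruments a hypothetical `add_taxes` (the flat tax of 13 is invented purely for illustration) so learners can see that the argument is evaluated before the call, and the assignment happens last:

```python
def add_taxes(amount):
    # By the time the function runs, the argument already holds
    # the result of the subtraction.
    print("function call sees:", amount)
    return amount + 13   # invented flat tax, for illustration only

cost, discount = 100, 20
price = add_taxes(cost - discount)   # subtraction -> function call -> assignment
print("assigned:", price)            # prints "assigned: 93"
```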

The second classic type of programming exercise is code and run (C&R), in which the learner writes code that produces a specified output. C&R exercises can be as simple or as complex as the teacher wants, but when used in class, they should be brief and have only one or two plausible correct answers. It’s often enough to ask novices to calculate and print a single value or call a specific function: experienced teachers often forget how hard it can be to figure out which parameters go where. For more advanced learners, figuring out which function to call is more engaging and a better gauge of their understanding.

Code & Run

The variable picture contains a full-color image read from a file. Using one function, create a black and white version of the image and assign it to a new variable called monochrome.

C&R exercises can be combined with MCQs. For example, this MCQ can only be answered by running the Unix ls command:

Combining MCQ with Code & Run

You are in the directory /home. Which of the following files is not in that directory?

  1. autumn.csv

  2. fall.csv

  3. spring.csv

  4. winter.csv

C&Rs help people practice the skills they most want to learn, but they can be hard to assess: there can be lots of unexpected ways to get the right answer, and people will be demoralized if an automatic grading system rejects their code because it doesn’t match the teacher’s. One way to reduce how often this occurs is to assess only their output, but that doesn’t give them feedback on how they are programming. Another is to give them a small test suite they can run their code against before they submit it (at which point it is run against a more comprehensive set of tests). Doing this helps them figure out if they have completely misunderstood the intent of the exercise before they do anything that they think might cost them grades.
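As a sketch of that idea, here is the kind of small visible test suite a teacher might hand out with a hypothetical C&R exercise (the function name `total` and the test cases are invented for illustration); the grader would run a larger hidden set after submission:

```python
# Visible starter tests for a hypothetical C&R exercise that asks learners
# to write total(values), returning the sum of a list of numbers.
# A sample correct solution is shown where the learner's code would go.
def total(values):
    result = 0
    for v in values:
        result += v
    return result

def run_visible_tests():
    assert total([]) == 0, "empty list should total zero"
    assert total([1, 2, 3]) == 6, "simple positive case"
    assert total([-1, 1]) == 0, "negatives should be handled"
    print("all visible tests passed")

run_visible_tests()
```

Failing one of these before submission tells the learner they have misread the specification, not that they have lost marks.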

Instead of writing code that satisfies some specification, learners can be asked to write tests to determine whether a piece of code conforms to a spec. This is a useful skill in its own right, and doing it may give learners a bit more sympathy for how hard their teachers work.

Inverting Code & Run

The function monotonic_sum calculates the sum of each section of a list of numbers in which the values are strictly increasing. For example, given the input [1, 3, 3, 4, 5, 1], the output is [4, 12, 1]. Write and run unit tests to determine which of the following bugs the function contains:

  • Considers every negative number the start of a new sub-sequence.

  • Does not include the first value of each sub-sequence in the sub-sum.

  • Does not include the last value of each sub-sequence in the sub-sum.

  • Only re-starts the sum when values decrease rather than fail to increase.

Fill in the blanks is a refinement of C&R in which the learner is given some starter code and has to complete it. (In practice, most C&R exercises are actually fill in the blanks because the teacher provides comments to remind the learners of the steps they should take.) Questions of this type are the basis for faded examples; as discussed in Chapter 4, novices often find them less intimidating than writing all the code from scratch, and since the teacher has provided most of the answer’s structure, submissions are much more predictable and therefore easier to check.

Fill in the Blanks

Fill in the blanks so that the code below prints the string 'hat'.

text = 'all that it is'
slice = text[____:____]
print(slice)

Parsons Problems also avoid the “blank screen of terror” problem while allowing learners to concentrate on control flow separately from vocabulary [Pars2006,Eric2015,Morr2016,Eric2017]. Tools for building and doing Parsons Problems online exist [Ihan2011], but they can be emulated (albeit somewhat clumsily) by asking learners to rearrange lines of code in an editor.

Parsons Problem

Rearrange and indent these lines to sum the positive values in a list. (You will need to add colons in appropriate places as well.)

total = 0
if v > 0
total += v
for v in values

Note that giving learners more lines than they need, or asking them to rearrange some lines and add a few more, makes Parsons Problems significantly harder [Harm2016].
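For the teacher’s reference, one correct arrangement of the Parsons Problem above (with an illustrative input list added so that the fragment runs) is:

```python
values = [-2, 1, 3, -5]   # illustrative input, not part of the exercise

total = 0
for v in values:
    if v > 0:
        total += v

print(total)   # → 4: sums only the positive values
```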

Tracing

Tracing execution is the inverse of a Parsons Problem: given a few lines of code, the learner has to trace the order in which those lines are executed. This is an essential debugging skill and a good way to solidify learners’ understanding of loops, conditionals, and the evaluation order of function and method calls. The easiest way to implement it is to have learners write out a sequence of labeled steps. Having them choose the correct sequence from a set (i.e. presenting this as an MCQ) adds cognitive load without adding value, since they have to do all the work of figuring out the correct sequence, then search for it in the list of options.

Tracing Execution Order

In what order are the labeled lines in this block of code executed?

A)     vals = [-1, 0, 1]
B)     inverse_sum = 0
       try:
           for v in vals:
C)             inverse_sum += 1/v
       except:
D)         pass

Tracing values is similar to tracing execution, but instead of spelling out the order in which code is executed, the learner lists the values that one or more variables take on as the program runs. One way to implement this is to give the learner a table whose columns are labeled with variable names and whose rows are labeled with line numbers, and ask them to fill in the values taken on by the variables on those lines.

Tracing Values

What values do left and right take on as this program executes?

A) left = 23
B) right = 6
C) while right:
D)     left, right = right, left % right
Line    left    right
----    ----    -----

You can also require learners to trace code backwards to figure out what the input must have been for the code to produce a particular result [Armo2008]. These reverse execution problems require search and deductive reasoning, and when the output is an error message, they help learners develop valuable debugging skills.

Reverse Execution

Fill in the missing number in values that caused this function to crash.

values = [ [1.0, -0.5], [3.0, 1.5], [2.5, ___] ]
runningTotal = 0.0
for (reading, scaling) in values:
    runningTotal += reading / scaling

Minimal fix exercises also help learners develop debugging skills. Given a few lines of code that contain a bug, the learner must find it and make one small change to fix it. Making the change can be done using C&R, while identifying it can be done as a multiple choice question.

Minimal Fix

This function is supposed to test whether a number lies within a range. Make one small change so that it actually does so.

def inside(point, lower, higher):
    if (point <= lower):
        return false
    elif (point <= higher):
        return false
    else:
        return true

Theme and variation exercises are similar, but the learner is asked to make a small alteration that changes the output in some specific way instead of making a change to fix a bug. Allowed changes can include changing a variable’s initial value, replacing one function call with another, swapping inner and outer loops, or changing the order of tests in a complex conditional. Again, this kind of exercise gives learners a chance to practice a useful real-world skill: the fastest way to produce the code they need is to tweak code that already does something close.

Theme and Variations

Change the inner loop in the function below so that it fills the upper left triangle of an image with a specified color.

function fillTriangle(picture, color) is
    for x := 1 to picture.width do
        for y := 1 to picture.height do
            picture[x, y] = color
        end
    end
end

Refactoring exercises are the complement of theme and variation exercises: given a working piece of code, the learner has to modify it in some way without changing its output. For example, the learner could replace loops with vectorized expressions or simplify the condition in a while loop. This is also a useful real-world skill, but there are often so many ways to refactor code that grading requires human inspection.

Refactoring

Write a single list comprehension that has the same effect as this loop.

result = []
for v in values:
    if len(v) > threshold:
        result.append(v)

Diagrams

Having learners draw concept maps and other diagrams gives insight into how they’re thinking (Section 3.1), but free-form diagrams take human time and judgment to assess. Labeling diagrams, on the other hand, is almost as powerful pedagogically but much easier to scale.

Rather than having learners create diagrams from scratch, provide them with a diagram and a set of labels and have them put the latter in the right places on the former. The diagram can be a data structure (“after this code is executed, which variables point to which parts of this structure?”), a chart (“match each of these pieces of code with the part of the chart it generated”), or the code itself (“match each term to an example of that program element”).

Labeling a Diagram

Figure [f:exercises-labeling] shows how a small fragment of HTML is represented in memory. Put the labels 1–9 on the elements of the tree to show the order in which they are reached in a depth-first traversal.

Labeling a diagram

Another way to use diagrams is to give learners the pieces of the diagram and ask them to arrange them correctly. This is a visual equivalent of a Parsons Problem, and you can provide as much or as little of a skeleton to help with placement as you think they’re ready for. I have fond memories of trying to place resistors and capacitors in a circuit diagram in order to get the right voltage at a certain point, and have seen teachers give learners a fixed set of Scratch blocks and ask them to create a particular drawing using only those blocks.

Matching problems can be thought of as a special case of labeling in which the “diagram” is a column of text and the labels are taken from the other column. One-to-one matching gives the learner two lists of equal length and asks them to pair corresponding items, e.g. “match each piece of code with the output it produces.”

Matching

Match each regular expression operator in Figure [f:exercises-matching] with what it does.

Matching items

With many-to-many matching the lists aren’t the same length, so some items may be matched to several others and others may not be matched at all. Many-to-many matching is more difficult because learners can’t do easy matches first to reduce their search space. Matching problems can be implemented by having learners submit lists of matching pairs (such as “A3, B1, C2”), but that’s clumsy and error-prone. Having them recognize a set of correct pairs in an MCQ is even worse, as it’s painfully easy to misread. Drawing or dragging works much better, but may require some work to implement.

Ranking is a special case of matching that is (slightly) more amenable to answering via lists, since our minds are pretty good at detecting errors or anomalies in sequences. The ranking criteria determine the level of reasoning required. If you have learners order sorting algorithms from fastest to slowest, you are probably exercising recall (i.e. asking them to recognize the algorithms’ names and remember their properties), while asking them to rank solutions from most robust to most brittle exercises reasoning and judgment.

Summarization also requires learners to use higher-order thinking and gives them a chance to practice a skill that is very useful when reporting bugs. For example, you can ask learners, “Which sentence best describes how the output of f changes as x varies from 0 to 10?” and then give several options as a multiple choice question. You can also ask for very short free-form answers to questions in constrained domains, such as, “What is the key feature of a stable sorting algorithm?” We can’t fully automate checks for these without a frustrating number of false positives (accepting wrong answers) and false negatives (rejecting correct ones), but questions of this kind lend themselves well to peer grading (Section 5.3).

Automatic Grading

Automatic program grading tools have been around longer than I have been alive: the earliest published mention dates from 1960 [Holl1960], and the surveys published in [Douc2005,Ihan2010] mention many specific tools by name. Building such tools is a lot more complex than it might first seem. How are assignments represented? How are submissions tracked and reported? Can learners co-operate? How can submissions be executed safely? [Edwa2014a] is an entire paper devoted to an adaptive scheme for detecting and managing infinite loops in code submissions, and that is just one of the many issues that come up.

When discussing auto-graders, it is important to distinguish learner satisfaction from learning outcomes. For example, [Magu2018] switched informal programming labs for a second-year CS course to a weekly machine-evaluated test using an auto-grader. Learners didn’t like the automated system, but the overall failure rate for the course was halved and the number of learners gaining first class honors tripled. In contrast, [Rubi2014] also began to use an auto-grader designed for competitions, but saw no significant decrease in their learners’ dropout rates; once again, learners made some negative comments about the tool, which the authors attribute to the quality of its feedback messages rather than to dislike of auto-grading.

[Srid2016] took a different approach. They used fuzz testing (i.e. randomly generated test cases) to check whether learner code does the same thing as a reference implementation supplied by the teacher. In the first project of a 1400-learner introductory course, fuzz testing caught errors that were missed by a suite of hand-written test cases for more than 48% of learners.
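In outline, the approach might be sketched as follows; the reference and submission functions here are stand-ins, not code from the paper:

```python
import random

def reference(values):
    """Teacher's reference implementation (illustrative):
    sum the positive values in a list."""
    return sum(v for v in values if v > 0)

def submission(values):
    """Learner code under test; this one has an off-by-one bug
    that silently drops the value 1."""
    return sum(v for v in values if v > 1)

def fuzz(submission, reference, trials=200, seed=0):
    """Run randomly generated cases until the two implementations
    disagree, returning a failing counter-example (or None)."""
    rng = random.Random(seed)
    for _ in range(trials):
        case = [rng.randint(-5, 5) for _ in range(rng.randint(0, 10))]
        if submission(case) != reference(case):
            return case
    return None
```

Random inputs like these tend to find boundary bugs that a short hand-written test suite misses, which is exactly the effect reported above.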

[Basu2015] gave learners a suite of solution test cases, but learners had to unlock each one by answering questions about its expected behavior before they were allowed to apply it to their proposed solution. For example, suppose learners had to write a function to find the largest adjacent pair of numbers in a list. Before being allowed to use the question’s tests, they had to choose the right answer to, “What does largestPair(4, 3, -1, 5, 3, 3) produce?” In a 1300-person university course, the vast majority of learners chose to validate their understanding of test cases this way before attempting to solve problems, and then asked fewer questions and expressed less confusion about assignments.

Against Off-the-Shelf Tools

It’s tempting to use off-the-shelf style checking tools to grade learners’ code. However, [Nutb2016] initially found no correlation between human-provided marks and style-checker rule violations. Sometimes this was because learners violated one rule many times (thereby losing more points than they should have), but other times it was because they submitted the assignment starter code with few alterations and got more points than they should have.

Even tools built specifically for teaching can fall short of teachers’ needs. [Keun2016a,Keun2016b] looked at the messages produced by 69 auto-grading tools. They found that the tools often do not give feedback on how to fix problems and take the next step. They also found that most teachers cannot easily adapt most of the tools to their needs: like many workflow tools, they tend to enforce their creators’ unrecognized assumptions about how institutions work. Their classification scheme is a useful shopping list when looking at tools of this kind.

[Buff2015] presents a well-informed reflection on the whole idea of providing automated feedback. Their starting point is that, “Automated grading systems help learners identify bugs in their code, [but] may inadvertently discourage learners from thinking critically and testing thoroughly and instead encourage dependence on the teacher’s tests.” One of the key issues they identified is that a learner may thoroughly test their code, but the feature may still not be implemented according to the teacher’s specifications. In this case, the “failure” is not caused by a lack of testing but by a misunderstanding of the requirements, and it is unlikely that more testing will expose the problem. If the auto-grading system doesn’t provide insightful, actionable feedback, this experience will only frustrate the learner.

In order to provide that feedback, [Buff2015]’s system identifies which methods in the learner’s code are executed by failing tests so that it can associate failed tests with particular features within the learner’s submission. The system decides whether specific hints have been “earned” by checking whether the learner has tested the associated feature enough, so learners cannot rely on hints instead of writing tests.

[Srid2016] describes some other approaches for sharing feedback with learners when automatically testing their code. The first is to provide the expected output for the tests—but then learners hard-code output for those inputs (because anything that can be gamed will be). The second is to report the pass/fail results for the learners’ code, but only supply the actual inputs and outputs of the tests after the submission date. However, telling learners that they are wrong but not telling them why is frustrating.

A third option is to use a technique called hashing to generate a value that depends on the output but doesn’t reveal it. If the user produces exactly the right output then its hash will unlock the solution, but it is impossible to work backward from the hash to figure out what the output is supposed to be. Hashing requires more work and explanation to set up, but strikes a good balance between revealing answers prematurely and not revealing them when it would help.
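A minimal version of this idea, assuming the exercise’s expected output is a string, might look like:

```python
import hashlib

def fingerprint(output):
    """Hash program output so that the expected value can be
    published without revealing the output itself."""
    return hashlib.sha256(output.encode("utf-8")).hexdigest()

# Distributed with the exercise: the hash of the correct output.
# The output "42\n" is purely illustrative.
EXPECTED = fingerprint("42\n")

def check(learner_output):
    """True only if the learner produced exactly the right output."""
    return fingerprint(learner_output) == EXPECTED
```

Because a cryptographic hash is one-way, publishing EXPECTED tells learners when they are right without telling them what “right” is; the flip side is that the comparison is exact, so even a missing newline reads as failure.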

Higher-Level Thinking

Many other kinds of programming exercises are hard for teachers to assess in a class with more than a handful of learners, and equally hard for automated platforms to assess at all. Larger programming projects are (hopefully) what classes are building toward, but the only way to give feedback on them is case by case.

Code review is also hard to grade automatically in general, but can be tackled if learners are given a list of faults to look for and asked to match particular comments against particular lines of code. For example, the learner can be told that there are two indentation errors and one bad variable name and asked to point them out. If they are more advanced, they could be given half a dozen kinds of remarks they could make about the code without being told how many of each they should find.

[Steg2016b] is a good starting point for a code style rubric, while [Luxt2009] looks at peer review in programming classes more generally. If you are going to have learners do reviews, use calibrated peer review (Section 5.3) so that they have models of what good feedback should look like.

Code Review

Mark the problems in each line of code using the rubric provided.

01)  def addem(f):
02)      x1 = open(f).readlines()
03)      x2 = [x for x in x1 if x.strip()]
04)      changes = 0
05)      for v in x2:
06)          print('total', total)
07)          tot = tot + int(v)
08)      print('total')
  1. poor variable name

  2. use of undefined variable

  3. missing return value

  4. unused variable

Exercises

Code and Run (pairs/10)

Create a short C&R exercise, then trade with a partner and see how long it takes each of you to understand and do the other’s exercise. Were there any ambiguities or misunderstandings in the exercise description?

Inverting Code and Run (small groups/15)

Form groups of 4–6 people. Have each member of the group create an inverted C&R exercise that requires people to figure out what input produces a particular output. Pick two at random and see how many different inputs the group can find that satisfy the requirements.

Tracing Values (pairs/10)

Write a short program (10–15 lines), trade with a partner, and trace how the variables in the program change value over time. What differences are there in how you and your partner wrote down your traces?

Refactoring (small groups/15)

Form groups of 3–4 people. Have each person select a short piece of code (10–30 lines long) they have written that isn’t as tidy as it could be, then choose one at random and have everyone in the group tidy it up independently. How do your cleaned-up versions differ? How well or how poorly would you be able to accommodate all of these variations if marking automatically or in a large class?

Labeling a Diagram (pairs/10)

Draw a diagram showing something that you have explained recently: how browsers fetch data from servers, the relationship between objects and classes, or how data frames are indexed in R. Put the labels on the side and ask your partner to place them.

Pencil-and-Paper Puzzles (whole class/15)

[Butl2017] describes a set of pencil-and-paper puzzles that can be turned into introductory programming assignments and reports that these assignments are enjoyed by learners and encourage meta-cognition. Think of a simple pencil-and-paper puzzle or game you played as a child and describe how you would turn it into a programming exercise.

Counting Failures (pairs/15)

Any useful estimate of how much time an exercise needs must take into account how frequent failures are and how much time is lost to them. For example, editing text files seems like a simple task, but what about finding those files? Most GUI editors save things to the user’s desktop or home directory; if the files used in a course are stored somewhere else, a substantial fraction won’t be able to navigate to the right directory without help. (If this seems like a small problem to you, please revisit the discussion of expert blind spot in Chapter 3.)

Working with a partner, make a list of “simple” things you have seen go wrong in exercises you have used or taken. How often do they come up? How long do they take learners to fix on their own or with help? How much time do you currently budget in class to deal with them?

Speaking of Timings (individual/10)

How accurate have the time estimates on the exercises in this book been so far?

Building a Community of Practice

You don’t have to fix all of society’s ills in order to teach programming, but you do have to get involved in what happens outside your class if you want people to learn. This applies to teachers as well as learners: many free-range teachers start out as volunteers or part-timers and have to juggle their classes with many other commitments. What happens outside the classroom is as important to their success as it is to their learners’, so the best way to help both is to foster a teaching community.

Finland and Why Not

Finland’s schools are among the most successful in the world, but as Anu Partanen has pointed out, they haven’t succeeded alone. Other countries’ attempts to adopt Finnish teaching methods are doomed to fail unless those countries also ensure that children (and their parents) are safe, well fed, and treated fairly by the courts [Sahl2015,Wilk2011]. This is no surprise given what we know about the importance of motivation for learning (Chapter 10): everyone does worse if they believe the system is unpredictable, unfair, or indifferent.

One framework for thinking about teaching communities is situated learning, which focuses on how legitimate peripheral participation leads people to become members of a community of practice [Weng2015]. Unpacking those terms, a community of practice is a group of people bound together by interest in some activity, such as knitting or particle physics. Legitimate peripheral participation means doing simple, low-risk tasks that the community recognizes as valid contributions: knitting your first scarf, stuffing envelopes during an election campaign, or proofreading documentation for open source software.

Situated learning focuses on the transition from being a newcomer to being accepted as a peer by those who are already members of the community. This usually means starting with simple tasks and tools, then doing similar tasks with more complex tools, and finally tackling the same work as advanced practitioners. For example, children learning music may start by playing nursery rhymes on a recorder or ukulele, then play other simple songs on a trumpet or saxophone in a band, and finally begin to explore their own musical tastes. Common ways to support this progression include:

Problem solving:

“I’m stuck: can we work on the design of this lesson together?”

Requests for information:

“What’s the password for the mailing list administrator?”

Seeking experience:

“Has anyone ever had a learner with a reading disability?”

Sharing resources:

“I put together a website for a class last year that you can use as a starting point.”

Coordination:

“Can we pool our t-shirt orders to get a discount?”

Building an argument:

“It will be easier to convince my boss to make changes if I know how other camps do this.”

Documenting projects:

“We’ve had this problem five times now. Let’s write it down once and for all.”

Mapping knowledge:

“What other groups are doing things like this in nearby neighborhoods or towns?”

Visits:

“Can we come and see your after-school program? We need to set one up in our city.”

Broadly speaking, a community of practice can be a:

Community of action:

people focused on a shared goal, such as getting a candidate elected.

Community of concern:

members are brought together by a shared problem, such as dealing with a long-term illness.

Community of interest:

focused on a shared love of something such as backgammon or knitting.

Community of place:

people who live or work together.

Most communities are mixtures of these types, such as the people in Toronto who like teaching tech. A community’s focus can also change over time: for example, a support group for people dealing with depression (a community of concern) may decide to raise funds to keep a help line going (a community of action). Keeping the help line running may then become the group’s focus (a community of interest).

Soup, Then Hymns

Manifestos are fun to write, but most people join a volunteer community to help and be helped rather than to argue over the wording of a vision statement. You should therefore focus on what people can create that other members of the community will use right away. Once your organization has shown that it can accomplish small things, people will be more confident that it is worth helping you with larger projects. That is the time to worry about defining the values that will guide your members.

Learn, Then Do

The first step in building a community is deciding whether you ought to build one at all, or whether it would be more effective to join an existing organization. Thousands of groups are already teaching people tech skills, from 4-H clubs and literacy programs to get-into-coding nonprofits like Black Girls Code and Bridge. Joining an existing group will give you a head start on teaching, an immediate set of colleagues, and a chance to learn more about how to run things; with luck, learning those skills while making an immediate contribution will matter more to you than being able to say that you are the founder or leader of something new.

Whether you join an existing group or start your own, you will be more effective if you read a little about community organizing. [Alin1989,Lake2018] is probably the best-known work on grassroots organizing, while [Brow2007,Midw2010,Lake2018] are practical manuals based on decades of experience. If you want to read more deeply, [Adam1975] is the story of the Highlander Folk School, whose approach has been emulated by many successful groups, while [Spal2014] is a guide to teaching adults written by someone with deep personal roots in organizing, and NonprofitReady.org offers free professional training.

Four Steps

Everyone who gets involved with your organization (including you) goes through four phases: recruitment, onboarding, retention, and retirement. You don’t need to worry about this cycle when you are just getting started, but it is worth thinking about as soon as more than a handful of non-founders are involved.

The first step is recruiting volunteers. Your marketing strategy should help with this by making your organization findable and by making your mission and value clear to people who want to get involved (Chapter 14). Share stories that exemplify the kind of help you want as well as stories about the people you are helping, and make it clear that there are many ways to get involved. (We discuss this in more detail in the next section.)

Your best source of new recruits is your own classes: “see one, do one, teach one” has worked well for volunteer organizations for as long as there have been volunteer organizations. Make sure that every class or other encounter ends by telling people how they can help and that their help will be welcome. People who come to you this way will know what you do and will have recent experience of being on the receiving end of what you offer, which helps your organization avoid a collective expert blind spot (Chapter 3).

Start Small

Ben Franklin observed that a person who has done someone a favor is more likely to do that person another favor than someone who has received a favor from them. Asking people to do something small for you is therefore a good step toward getting them to do something larger. A natural way to do this when teaching is to ask people to fix wording or spelling mistakes in your lesson materials, or to suggest new exercises or examples. If your materials are written in a maintainable way (Section 6.3), this gives them a chance to practice some useful skills and gives you an opening to start a conversation that might lead to a new member for your organization.

The middle of the volunteer lifecycle is onboarding and retention, which we cover in Sections 13.3 and 13.4. The final step is when a member leaves the organization: everyone moves on eventually, and healthy organizations plan for that moment. A few simple things can make both the person who is leaving and everyone who is staying feel positive about the change:

Ask people to be explicit about their departure

so that everyone knows they have actually left.

Make sure they don’t feel embarrassed about leaving

or about anything else.

Give them a chance to pass on their knowledge.

For example, you can ask them to mentor someone for a few weeks as their last contribution, or have someone who is staying in the organization interview them to collect any stories worth re-telling.

Make sure they hand over the keys.

It’s awkward to discover six months after someone has left that they are the only person who knows how to book the venue for the annual picnic.

Follow up 2–3 months after they leave

to see whether they have any further thoughts about what worked and what didn’t while they were with you, or any advice they either didn’t think to give or were uncomfortable giving on their way out the door.

Thank them,

both when they leave and the next time your group gets together.

A Missing Manual

Thousands of books have been written on how to start a company. Only a handful describe how to wind one down or how to leave one gracefully, even though there is an ending for every beginning. If you ever write one, please let me know.

Onboarding

Once they have decided to become part of a group, people need to get up to speed, and [Shol2019] summarizes what we know about helping them do that. The first rule is to have and enforce a Code of Conduct (Section 9.1), and to find an independent party who is willing to receive and review reports of inappropriate behavior. Someone outside the organization will have the objectivity that members may lack, and can protect people who might otherwise hesitate to raise problems concerning the project’s leaders for fear of retaliation or reputational harm. The project’s leaders should also publicize decisions in which the Code of Conduct has been applied so that the community sees that it is meaningful.

The next most important rule is to be welcoming. As Fogel said [Foge2005], “If a project doesn’t make a good first impression, newcomers may wait a long time before giving it a second chance.” Other authors have empirically confirmed the importance of friendly, polite social environments in open projects [Sing2012,Stei2013,Stei2018]:

Publica un mensaje de bienvenida

en las páginas de redes sociales, canales de Slack, foros o listas de correo electrónico del proyecto. Los proyectos podrían considerar mantener un canal o lista de “Bienvenida” exclusivos para ese fin, donde alguna de las personas que lidera el proyecto o gestiona la comunidad escribe una breve publicación pidiendo a los recién llegados que se presenten.

Ayuda a las personas a encontrar una manera de hacer una contribución inicial,

como etiquetar lecciones o talleres particulares que necesitan trabajo como “adecuados para recién llegados” y pedir a los miembros ya establecidos que no los arreglen, para asegurar que haya lugares adecuados donde los recién llegados puedan comenzar a trabajar.

Dirige a las personas recién llegadas hacia otras personas del proyecto similares a ellas

para demostrarles que pertenecen.

Indícales los recursos esenciales del proyecto a las personas recién llegadas

como las pautas de contribución.

Designa una o dos personas del proyecto como contacto

para cada persona recién llegada. Hacer esto puede lograr que quienes recién llegan sean menos reacias a hacer preguntas.

Una tercera regla que ayuda a todas las personas (no solo a quienes recién llegan) es hacer que el conocimiento se pueda encontrar y mantenerlo actualizado. Las personas nuevas son como exploradores que deben orientarse dentro de un paisaje desconocido [Dage2010]. La información dispersa generalmente hace que las personas nuevas se sientan perdidas y desorientadas. Dadas las múltiples posibilidades de lugares para mantener la información (por ejemplo, wikis, archivos en control de versiones, documentos compartidos, tweets antiguos o mensajes de Slack y archivos de correo electrónico), es importante mantener la información sobre un tema específico consolidada en un solo lugar, para que las personas nuevas no necesiten navegar por múltiples fuentes de datos para encontrar lo que necesitan. Organizar la información hace que las personas recién llegadas tengan más confianza y mejor orientación [Stei2016].

Finalmente, reconoce las primeras contribuciones de quienes recién inician y piensa dónde y cómo podrían ayudar a largo plazo. Una vez que han realizado exitosamente su primera contribución, es probable que tengan una mejor idea de lo que pueden ofrecer y de cómo el proyecto puede ayudarles. Ayuda a las personas nuevas a encontrar el siguiente problema en el que tal vez quieran trabajar o guíalas al siguiente tema que podrían disfrutar leyendo. En particular, animarlas a ayudar a la próxima ola de personas nuevas es una buena manera de reconocer lo que han aprendido y una forma efectiva de transmitirlo.

Retención

Si tu gente no se divierte, algo está muy mal.
— Saul Alinsky

Quienes participan de la comunidad no deberían esperar disfrutar cada momento de su trabajo con tu organización, pero si no disfrutan nada de eso, no se quedarán. El disfrute no necesariamente significa tener una fiesta anual: la gente puede disfrutar cocinar, entrenar a otras personas o simplemente trabajar en silencio junto a otros y otras. Hay varias cosas que toda organización debe hacer para garantizar que las personas obtengan algo que valoran de su trabajo:

Pregunta a las personas qué quieren en vez de adivinar.

Así como tú no eres tus estudiantes (Section 6.1), probablemente seas diferente de otras personas de tu organización. Pregúntales qué quieren hacer, qué se sienten cómodas haciendo (que puede no ser lo mismo) y qué limitaciones de tiempo tienen. Pueden llegar a decir “cualquier cosa”, pero incluso una breve conversación probablemente ayude a descubrir que les gusta interactuar con las personas pero prefieren no administrar las finanzas del grupo, o viceversa.

Proporciona muchas formas de contribuir.

Cuantas más formas haya para que las personas ayuden, más personas podrán hacerlo. Alguien a quien no le gusta estar frente a una audiencia puede mantener el sitio web de su organización, manejar sus cuentas o corregir las lecciones.

Reconoce las contribuciones.

A todos y todas nos gusta que nos aprecien, así que las comunidades deben reconocer las contribuciones de sus miembros, tanto en público como en privado, mencionándolas en presentaciones, poniéndolas en el sitio web, etc. Cada hora que alguien le haya dado a tu proyecto puede ser una hora quitada de su vida personal o de su empleo oficial; reconoce ese hecho y deja en claro que, si bien más horas serían bienvenidas, no esperas que hagan sacrificios insostenibles.

Haz espacio.

Crees que estás siendo útil, pero intervenir en cada decisión priva a las personas de su autonomía, lo que reduce su motivación (Section 10). En particular, si siempre eres quien responde primero a correos electrónicos o mensajes de chat, las personas tienen menos oportunidades de crecer como miembros y crear colaboraciones horizontales. Como resultado, la comunidad continuará centrada en una o dos personas en lugar de convertirse en una red altamente conectada en la que otros y otras se sientan cómodos participando.

Otra forma de recompensar la participación es ofrecer capacitación. Las organizaciones necesitan presupuestos, propuestas de subsidios y resolución de disputas. A la mayoría de las personas nunca se les enseña a hacer estas tareas, igual que no se les enseña a enseñar, así que la oportunidad de adquirir habilidades transferibles es una razón poderosa para que las personas se involucren y se mantengan involucradas. Si vas a hacer esto, no intentes proporcionar la capacitación tú mismo/a a menos que sea tu especialidad. Muchos grupos cívicos y comunitarios tienen programas de este tipo y probablemente puedas llegar a un acuerdo con alguno de ellos.

Finalmente, aunque las personas voluntarias pueden hacer mucho, tareas como la administración de sistemas y la contabilidad eventualmente necesitan personal remunerado. Cuando llegues a ese punto, no pagues nada o paga un salario adecuado. Si no les pagas nada, su verdadera recompensa es la satisfacción de hacer el bien; por otro lado, si les pagas una cantidad simbólica, les quitas esa satisfacción sin darles la posibilidad de ganarse la vida.

Gobernanza

Cada organización tiene una estructura de poder: la única pregunta es si es formal y rinde cuentas o informal y, por lo tanto, no rinde cuentas [Free1972]. Esta última forma funciona bastante bien para grupos de hasta media docena de personas en las que todos y todas se conocen. Más allá de esa cantidad, necesitas reglas para explicar quién tiene la autoridad para tomar qué decisiones y cómo lograr consenso (Section 20.1).

El modelo de gobernanza que prefiero se denomina commons (como en un bien común), donde la administración se realiza conjuntamente por la comunidad, de acuerdo con las reglas que ella misma ha desarrollado y adoptado [Ostr2015]. Como subraya [Boll2014], las tres partes de esa definición son esenciales: un commons no es solo una pastura compartida o un conjunto de bibliotecas de software, sino que también incluye a la comunidad que lo comparte y las reglas que usa para hacerlo.

Los modelos más populares son las corporaciones con fines de lucro y las organizaciones sin fines de lucro; la mecánica varía de una jurisdicción a otra, por lo que debes buscar asesoramiento antes de elegir. Ambos tipos de organización depositan la máxima autoridad en su junta o directorio. En términos generales, se trata de un directorio de servicio, cuyos miembros también asumen otras funciones en la organización, o de un directorio cuya responsabilidad principal es contratar, supervisar y, si es necesario, despedir a quien dirige. Los miembros de la junta pueden ser elegidos por la comunidad o nombrados; en cualquier caso, es importante priorizar la capacidad sobre la pasión (la última es más importante para la base de la organización) y tratar de reclutar habilidades particulares como contabilidad, marketing, etc.

Elige la democracia

Cuando llegue el momento, haz de tu organización una democracia: tarde o temprano (generalmente más temprano que tarde), cada junta designada se convierte en una sociedad de mutuo acuerdo. Darle poder a sus miembros es complicado, pero es la única forma inventada hasta ahora para garantizar que las organizaciones continúen satisfaciendo las necesidades reales de las personas.

Cuídate

El síndrome de desgaste ocupacional (burnout en inglés) es un riesgo crónico en cualquier actividad comunitaria [Pign2016], así que aprende a decir no más seguido de lo que dices sí. Si no te cuidas, no podrás cuidar a tu comunidad.

Quedándose sin “No”

Investigaciones en la década de 1990 parecían mostrar que nuestra capacidad de ejercer fuerza de voluntad es finita: si tenemos que resistirnos a comer la última dona en la caja cuando tenemos hambre, somos menos propensos a doblar la ropa y viceversa. Este fenómeno se llama agotamiento del ego, y si bien los estudios recientes no han podido replicar esos primeros resultados [Hagg2016], decir “sí” cuando estamos demasiado cansados para decir “no” es una trampa en la que caen muchos organizadores.

Una forma de asegurarte de cumplir con tu “no” es escribir una lista de cosas que vale la pena hacer pero que no vas a hacer. Al momento de escribir este libro, mi lista incluye cuatro libros, dos proyectos de software, el rediseño de mi sitio web personal, y aprender a tocar el silbato.

Finalmente, recuerda de vez en cuando que toda organización eventualmente necesita ideas y liderazgo nuevos. Cuando llegue ese momento, entrena a tus sucesores/as y sigue adelante con la mayor elegancia posible. Indudablemente harán cosas que tú no harías, pero pocas cosas en la vida son tan satisfactorias como ver cómo algo que ayudaste a construir adquiere vida propia. Celebra eso: no tendrás ningún problema para encontrar otra cosa que te mantenga ocupado/a.

Ejercicios

Varios de estos ejercicios se toman de [Brow2007].

¿Qué tipo de comunidad? (individual/15)

Vuelve a leer la descripción de los cuatro tipos de comunidades y decide cuál o cuáles es o aspira a ser tu grupo.

Personas que puedes conocer (grupos pequeños/30)

Como organizador/a, a veces parte de tu trabajo es ayudar a las personas a encontrar una manera de contribuir a pesar de sí mismas. En pequeños grupos, elige a tres de las personas descriptas a continuación y discute cómo las ayudarías a convertirse en mejores contribuyentes para tu organización.

Ana

sabe más sobre cada tema que todas las demás personas juntas; al menos, ella cree que es así. No importa lo que digas, ella te corregirá; no importa lo que sepas, ella lo sabe mejor.

Catalina

tiene tan poca confianza en su propia habilidad que no tomará ninguna decisión, sin importar qué tan pequeña sea, hasta que no haya consultado con alguien más.

Fernando

disfruta saber cosas que otras personas no saben. Puede hacer milagros, pero cuando se le pregunta cómo lo hizo, sonreirá y dirá: “Oh, estoy seguro de que puedes resolverlo”.

Andrea

es tranquila. Nunca habla en las reuniones, incluso cuando sabe que otras personas están equivocadas. Podría contribuir a la lista de correo, pero es muy sensible a las críticas y siempre retrocede en lugar de defender su punto.

René

aprovecha el hecho de que la mayoría de las personas preferiría hacer su parte del trabajo antes que quejarse de él. Lo frustrante es lo convincente que resulta cuando alguien finalmente lo confronta. “Ha habido errores en todos lados”, dice, o “Bueno, creo que estás siendo un poco quisquilloso”.

Melisa

tiene buenas intenciones, pero de alguna manera siempre surge algo y sus tareas nunca terminan hasta el último momento posible. Por supuesto, eso significa que quienes dependen de ella no pueden hacer su trabajo hasta después del último momento posible.

Roberto

es grosero. “Así es como hablo”, dice. “Si no puedes soportarlo, ve a buscar otro equipo”. Su frase favorita es “eso es estúpido”, y usa una obscenidad cada dos oraciones.

Valores (grupos pequeños/45)

Responde estas preguntas por tu cuenta y luego compara tus respuestas con las de los demás.

  1. ¿Cuáles son los valores que expresa tu organización?

  2. ¿Son estos los valores que deseas que la organización exprese?

  3. Si tu respuesta es no, ¿qué valores te gustaría expresar?

  4. ¿Cuáles son los comportamientos específicos que demuestran esos valores?

  5. ¿Qué comportamientos demostrarían lo contrario de esos valores?

Procedimientos de reuniones (grupos pequeños/30)

Responde estas preguntas por tu cuenta y luego compara tus respuestas con las de los demás.

  1. ¿Cómo se llevan a cabo las reuniones?

  2. ¿Es así como quieres que se realicen tus reuniones?

  3. ¿Las reglas para ejecutar reuniones son explícitas o simplemente se asumen?

  4. ¿Estas son las reglas que quieres?

  5. ¿Quién es elegible para votar o tomar decisiones?

  6. ¿Son estas las personas a las que quieres que se les otorgue autoridad para tomar decisiones?

  7. ¿Utilizan la regla de la mayoría, toman decisiones por consenso u otra cosa?

  8. ¿Es así como quieres tomar decisiones?

  9. ¿Cómo saben las personas en una reunión cuándo se ha tomado una decisión?

  10. ¿Cómo saben las personas que no estuvieron en una reunión qué decisiones se tomaron?

  11. ¿Funciona esto para tu grupo?

Tamaño (grupos pequeños/20)

Responde estas preguntas por tu cuenta y luego compara tus respuestas con las de los demás.

  1. ¿Qué tan grande es tu grupo?

  2. ¿Es este el tamaño que deseas para tu organización?

  3. Si tu respuesta es no, ¿de qué tamaño te gustaría que fuera?

  4. ¿Tienes algún límite en cuanto a la cantidad de miembros?

  5. ¿Te beneficiarías de establecer ese límite?

Convertirse en miembro (grupos pequeños/45)

Responde estas preguntas por tu cuenta y luego compara tus respuestas con las de los demás.

  1. ¿Cómo se une alguien a tu grupo?

  2. ¿Qué tan bien funciona este proceso?

  3. ¿Hay cuotas de membresía?

  4. ¿Se requiere que las personas estén de acuerdo con alguna regla de comportamiento al unirse?

  5. ¿Son estas las reglas de comportamiento que quieres?

  6. ¿Cómo descubren las personas recién llegadas lo que hay que hacer?

  7. ¿Qué tan bien funciona este proceso?

Dotación de personal (grupos pequeños/30)

Responde estas preguntas por tu cuenta y luego compara tus respuestas con las de los demás.

  1. ¿Tienes personal pago en tu organización o son todos y todas voluntarios/as?

  2. ¿Deberías tener personal pago?

  3. ¿Quieres / necesitas más o menos personal?

  4. ¿Qué hacen los miembros del personal?

  5. ¿Son estos los roles y funciones principales que necesitas que el personal desempeñe?

  6. ¿Quién supervisa a tu personal?

  7. ¿Es este el proceso de supervisión que quieres para tu grupo?

  8. ¿Cuánto le pagan a tu personal?

  9. ¿Es este el salario adecuado para realizar el trabajo necesario?

Dinero (grupos pequeños/30)

Responde estas preguntas por tu cuenta y luego compara tus respuestas con las de los demás.

  1. ¿Quién paga y por qué?

  2. ¿Son estas las personas que quieres que paguen, y por estas razones?

  3. ¿De dónde sacas/obtienes tu dinero?

  4. ¿Es así como quieres obtener tu dinero?

  5. Si no, ¿tienes algún plan para hacerlo de otra manera?

  6. Si es así, ¿cuáles son esos planes?

  7. ¿Quién está siguiendo estos planes para asegurarse de que suceda?

  8. ¿Cuánto dinero tienes?

  9. ¿Cuánto dinero necesitas?

  10. ¿En qué gastas la mayor parte de tu dinero?

  11. ¿Es así como quieres gastar tu dinero?

Tomando ideas prestadas (toda la clase/15)

Muchas de mis ideas sobre cómo construir una comunidad han sido moldeadas por mi experiencia en el desarrollo de software de código abierto. [Foge2005] (que está disponible en línea) es una buena guía de lo que ha funcionado y lo que no para esas comunidades, y el sitio de guías de código abierto también tiene una gran cantidad de información útil. Elige una sección de este último recurso, como “Encontrar usuarios para tu proyecto” (Finding Users for Your Project en inglés) o “Liderazgo y gobernanza” (Leadership and Governance en inglés), y presenta al grupo, en dos minutos, una idea que te haya resultado útil o con la que estés muy en desacuerdo.

¿Quién eres tú? (grupos pequeños/20)

La Administración Nacional Oceánica y Atmosférica (NOAA por sus siglas en inglés) tiene una guía breve, útil y divertida para lidiar con comportamientos disruptivos. Clasifica esos comportamientos bajo etiquetas como “hablador”, “indecisa” y “tímida”, y describe estrategias para manejar cada uno. En grupos de 3 a 6 personas, lean la guía y decidan cuál de estas descripciones les queda mejor. ¿Creen que las estrategias descritas para manejar personas como ustedes son efectivas? ¿Hay otras estrategias igualmente o más efectivas?

Creando lecciones entre todos y todas (grupos pequeños/30)

Una de las claves del éxito de the Carpentries es su énfasis en construir y mantener lecciones en forma colaborativa [Wils2016,Deve2018]. Trabajando en grupos de 3–4:

  1. Elige una lección breve que todos/as hayan usado.

  2. Haz una revisión cuidadosa para crear una única lista con sugerencias de mejoras.

  3. Ofrece esas sugerencias a quien escribió la lección.

¿Estás crujiente? (individual/10)

Johnathan Nightingale escribió:

Cuando trabajaba en Mozilla, utilizábamos el término “crujiente” (crispy en inglés) para referirnos al estado justo antes de llegar al síndrome de desgaste ocupacional. Las personas que están crujientes no son divertidas. Son descorteses. Están desesperadas por una pelea que puedan ganar. Lloran sin previo aviso. Reconocíamos lo “crujiente” en nuestros colegas y nos cuidábamos mutuamente, [pero] es algo feo que vimos tanto que construimos todo un proceso cultural a su alrededor.

Responde “sí” o “no” a cada una de las siguientes preguntas. ¿Qué tan cerca estás de tener síndrome de desgaste ocupacional?

  • ¿Te has vuelto cínica/o o crítica/o en el trabajo?

  • ¿Tienes que arrastrarte al trabajo o tienes problemas para comenzar a trabajar?

  • ¿Te has vuelto irritable o impaciente con tus compañeros de trabajo?

  • ¿Te resulta difícil concentrarte?

  • ¿No logras satisfacción de tus logros?

  • ¿Estás usando comida, drogas o alcohol para sentirte mejor o simplemente no sentir?

Difusión

Está de moda en los círculos tecnológicos menospreciar a las universidades y las instituciones gubernamentales como si fueran dinosaurios lentos, pero en mi experiencia no son peores que empresas de tamaño similar. Tanto el consejo escolar como la biblioteca o la oficina del concejal de la ciudad pueden llegar a ofrecer espacio, fondos, publicidad, conexiones con otros grupos que todavía no hayas conocido y una gran cantidad de otras cosas útiles; conocerlas puede ayudar a resolver o evitar problemas en el corto plazo y generar beneficios en el futuro.

Marketing

Las personas con conocimientos académicos y técnicos muchas veces piensan que el marketing trata sobre confundir y engañar. En realidad, trata sobre ver las cosas desde la perspectiva de otras personas, comprender sus deseos y necesidades, y explicar cómo puedes ayudarles; en pocas palabras, cómo enseñarles. Este capítulo analiza cómo usar ideas de los capítulos anteriores para lograr que las personas entiendan y apoyen lo que estás haciendo.

El primer paso es averiguar qué le ofreces a quién, es decir, qué es lo que realmente atrae a los voluntarios, los fondos y otro tipo de apoyo que puedas necesitar para continuar. La respuesta generalmente es contraintuitiva. Por ejemplo, la mayoría de los científicos cree que sus productos son sus artículos, cuando en realidad son sus propuestas de subsidios, ya que son estas las que atraen el dinero [Kuch2011]. Los artículos son la publicidad que convence a otras personas de otorgar fondos a las propuestas, así como ahora los álbumes son la publicidad que convence a la gente de comprar entradas para el show y remeras del músico que van a ver.

Supongamos que tu grupo ofrece talleres de programación de fin de semana a personas que están reinsertándose en la fuerza de trabajo después de haber estado alejadas por varios años. Si los asistentes pueden pagar lo suficiente para cubrir tus costos, entonces son tus clientes y el taller es tu producto. Si, por otro lado, los talleres son gratuitos o los estudiantes solo pagan un monto simbólico para reducir la tasa de ausencias, entonces tu producto real puede ser una mezcla de:

  • tus proyectos de subsidio;

  • los antiguos alumnos de tus talleres a los que las empresas que te patrocinaron quisieran contratar;

  • el resumen de media página de tus talleres en el balance anual de tu intendente/alcalde al concejo deliberante, que muestra cómo apoya el sector tecnológico local;

  • la satisfacción personal que obtienen los voluntarios cuando enseñan.

Tal como con el diseño de lecciones (Chapter 6), los primeros pasos en marketing son crear el equivalente de estudiantes tipo (gente que podría estar interesada en lo que estás haciendo) y averiguar cuáles de sus necesidades puedes satisfacer. Una manera de resumir esto último es escribir discursos de presentación dirigidos a diferentes personas. Una plantilla muy usada para esto es:

Para [audiencia objetivo]
quienes [insatisfacción con lo que se encuentra actualmente disponible],
nuestro/a [categoría]
provee [beneficio clave].
A diferencia de [alternativas],
nuestro programa [característica distintiva clave].

Continuando con el ejemplo del taller de fin de semana, se podría usar este discurso para los participantes:

Para participantes que todavía tienen responsabilidades familiares, nuestros talleres introductorios de programación proveen clases en fin de semana con guardería incluida. A diferencia de las clases en línea, nuestro programa le da a la gente la oportunidad de conocer a otras personas en la misma etapa de la vida.

y ésta otra para los tomadores de decisiones en las empresas que podrían patrocinar los talleres:

Para empresas que quieren reclutar desarrolladores de software de nivel básico y que tienen dificultades para encontrar candidatos con suficiente madurez y de diversas formaciones, nuestros talleres introductorios de programación proveen potenciales reclutas. A diferencia de las ferias de reclutamiento universitario, nuestro programa conecta a las empresas con una gran variedad de candidatos/as.

Si no sabes por qué las distintas partes interesadas podrían estar interesadas en lo que haces, pregúntales. Si lo sabes, pregúntales igual: las respuestas cambian con el tiempo y puedes descubrir cosas que no habías notado antes.

Una vez que tengas estos discursos, deberían guiar lo que publicas en el sitio web y en el material de difusión, para ayudar a las personas a descubrir lo más rápido posible si tú y ellas tienen algo de qué hablar. (Sin embargo, no deberías copiarlos textualmente: muchas personas en tecnología han visto esta plantilla tan seguido que sus ojos se pondrán vidriosos si la vuelven a encontrar.)

Mientras escribes estos discursos, recuerda que hay varias razones para aprender a programar (Section 1.4). Una sensación de logro, el control sobre sus propias vidas y ser parte de una comunidad pueden motivar a las personas más que el dinero (Chapter 10). Podrían ofrecerse como voluntarias para enseñar contigo porque sus amigos lo están haciendo; de igual manera, una empresa puede decir que patrocina clases para estudiantes de secundaria económicamente desfavorecidos porque quiere tener un grupo más grande de empleados potenciales en el futuro, pero su CEO podría en realidad estar haciéndolo simplemente porque es lo correcto.

Branding y Posicionamiento

Una marca es la primera reacción de una persona ante la mención de un producto; si la reacción es “¿qué es eso?”, uno (todavía) no tiene una marca. El branding es importante porque la gente no va a ayudar en algo que no conoce o que no le interesa.

La mayor parte de la discusión actual sobre branding se enfoca en cómo crear conciencia en línea. Las listas de correo, los blogs y Twitter generan maneras de llegar a la gente, pero a medida que el volumen de información basura aumenta, la gente presta menos atención a cada interrupción individual.

Esto hace que el posicionamiento sea más importante. Algunas veces llamado “diferenciación”, es lo que distingue tu oferta de las otras: la sección “a diferencia de” de tu discurso de presentación. Cuando te comunicas con personas que están familiarizadas con tu campo, debes hacer hincapié en tu posicionamiento, ya que es eso lo que va a llamar su atención.

Existen otras cosas que puedes hacer para construir tu marca. Una de ellas es usar accesorios, como un robot que uno de los estudiantes hizo a partir de restos que encontró en su casa [Schw2013] o el sitio web que otro estudiante hizo para el geriátrico de sus padres.

Otra opción es hacer un video corto, de no más de un par de minutos de duración, que resalte los antecedentes y logros de tus estudiantes. El objetivo es contar una historia: si bien la gente siempre pide datos, cree y recuerda historias.

Mitos Fundacionales

Una de las historias más convincentes que una persona o grupo puede contar es por qué y cómo comenzaron. ¿Estás enseñando algo que quisieras que alguien te hubiera enseñado pero no lo hizo? ¿Había una persona en particular a la que quisieras ayudar, y eso abrió las compuertas?

Si no hay una sección en tu sitio web que comience con “Había una vez,” piensa en agregar una.

Un paso crucial es lograr que tu organización pueda ser encontrada en las búsquedas en línea. [DiSa2014b] descubrió que los términos de búsqueda que los padres usaban para encontrar clases de computación extraescolares para sus hijos en realidad no encontraban esas clases, y muchos otros grupos se enfrentan a desafíos similares. Hay mucho folklore sobre cómo hacer que las cosas puedan ser halladas en internet (lo que se conoce como optimización de motores de búsqueda o SEO); dados los poderes cuasimonopólicos y la falta de transparencia de Google, la mayor parte se reduce a tratar de estar un paso adelante de los algoritmos diseñados para evitar que la gente manipule los rankings.

A menos que estés muy bien financiado/a, lo mejor que puedes hacer es buscarte y buscar a tu organización frecuentemente para ver qué surge, y luego leer guías sobre el tema y hacer lo que puedas para mejorar tu sitio. Ten en mente esta viñeta de XKCD: la gente no quiere saber sobre el organigrama ni hacer un paseo virtual por el sitio; necesita tu dirección, información sobre dónde hay estacionamiento cerca y alguna idea sobre qué enseñas, cuándo lo enseñas y cómo va a cambiar sus vidas.

No todo el mundo vive en línea

Estos ejemplos asumen que la gente tiene acceso a internet y que los grupos tienen dinero, materiales, tiempo libre y/o habilidades técnicas. La mayoría no los tiene; de hecho, quienes trabajan con grupos económicamente desfavorecidos muy probablemente no los tengan. (Como dice Rosario Robinson, “lo gratis funciona para quienes pueden darse el lujo de lo gratuito”.) En estas situaciones, las historias son más importantes que el programa del curso porque son más fáciles de volver a contar. De manera similar, si las personas que deseas alcanzar no están tan en línea como tú, los avisos en las carteleras de las escuelas, las bibliotecas locales, los centros de acogida y los almacenes pueden ser el camino más efectivo para alcanzarlas.

El arte de las llamadas en frío

Crear un sitio web y desear que las personas lo encuentren es fácil; llamar por teléfono o golpear puertas sin ningún tipo de introducción previa es más difícil. Sin embargo, al igual que pararse a enseñar, es un oficio que puede aprenderse. Aquí hay diez reglas simples para convencer a las personas:

1: No lo hagas.

Si tienes que convencer a alguien de algo, lo más probable es que realmente no quiera hacerlo. Respeta eso: casi siempre es mejor a largo plazo dejar una tarea en particular sin hacer que usar la culpa u otros trucos psicológicos inescrupulosos que solo generarán resentimiento.

2: Sé amable.

No sé si realmente existe un libro llamado Trucos secretos de los maestros de ventas ninja pero, si existe, probablemente les dice a sus lectores que hacer algo por un potencial cliente crea un sentido de obligación, lo que a su vez aumenta las probabilidades de una venta. Esto puede funcionar, pero solo funciona una vez y es algo bastante turbio. Por otra parte, si eres genuinamente amable y ayudas a otras personas porque eso es lo que las buenas personas hacen, tal vez puedas inspirarlas a ser buenas personas también.

3: Apela al bien mayor.

Si comienzas hablando sobre lo que hay disponible para ellos/as, estás indicando que deberían pensar en su interacción contigo como si fuera un intercambio comercial de valor que debe negociarse. En su lugar, empieza por explicar cómo aquello en lo que quieres que ayuden puede hacer del mundo un lugar mejor, y dilo en serio. Si lo que estás proponiendo no va a hacer del mundo un lugar mejor, propone algo mejor.

4: Comienza desde algo pequeño.

Es comprensible que la mayoría de las personas se muestren reacias a sumergirse de lleno en las cosas, así que debes darles la oportunidad de probar las aguas y de conocerte a ti y a todos los demás involucrados en aquello en lo que necesitas ayuda. No te sorprendas ni te decepciones si las cosas terminan ahí: todo el mundo está ocupado o cansado, tiene proyectos propios o tal vez tiene un modelo mental distinto de cómo deberían funcionar las colaboraciones. Recuerda la regla 90-9-1 (el 90% va a mirar, el 9% va a hablar y el 1% realmente va a hacer cosas) y ajusta tus expectativas de modo acorde.

5: No crees un proyecto: crea una comunidad.

Solía pertenecer a un equipo de béisbol que nunca jugaba realmente al béisbol: nuestros “partidos” eran solo una excusa para pasar tiempo juntos y disfrutar de la compañía mutua. Probablemente no quieras llegar tan lejos, pero compartir una taza de té con alguien o celebrar el cumpleaños de su primer/a nieto/a puede darte cosas que ninguna cantidad de dinero puede dar.

6: Establece un punto de conexión.

“Estaba hablando con X” o “Nos conocimos en Y” les da contexto, lo que a su vez los hace sentir más cómodos. Esto debe ser específico: quienes envían correo basura y las empresas que llaman por teléfono constantemente nos han entrenado para ignorar cualquier cosa que comience con la frase “hace poco tiempo encontré tu sitio web”.

7: Sé específico sobre lo que estás pidiendo.

Las personas necesitan saber esto para que puedan determinar si el tiempo y las habilidades que tienen coinciden con lo que necesitas. Ser realista desde el principio también es una señal de respeto: si le dices a la gente que necesitas una mano para mover algunas cajas cuando en realidad estás mudando una casa entera, probablemente no te ayudarán por segunda vez.

8: Establece tu credibilidad.

Menciona a tus patrocinadores, tu tamaño, cuánto tiempo ha existido tu grupo o algo que hayas logrado en el pasado, para que crean que vale la pena tomarte en serio.

9: Crea una ligera sensación de urgencia.

“Esperamos lanzar esto en la primavera” tiene muchas más probabilidades de generar una respuesta positiva que “eventualmente queremos lanzar esto”. Sin embargo, la palabra “ligera” es importante: si tu pedido es urgente, la mayoría de las personas asumirá que eres una persona desorganizada o que algo ha salido mal, y puede optar por la prudencia.

10: Entiende la indirecta.

Si la primera persona a la que le pides ayuda dice que no, pregúntale a otra. Si la quinta o la décima persona dice que no, pregúntate si lo que estás tratando de hacer tiene sentido y vale la pena hacerse.

Esta plantilla de correo electrónico sigue todas estas reglas. Ha funcionado bastante bien: hallamos que cerca de la mitad de los correos eran respondidos, aproximadamente la mitad de quienes respondían querían hablar más y la mitad de estos últimos casos condujeron a talleres, lo que significa que el 10–15% de los correos electrónicos enviados resultaron en talleres. Esto puede ser bastante desmoralizante si no estás acostumbrado/a, pero es mucho mejor que la tasa de respuesta de 2–3% que la mayoría de las organizaciones espera de las llamadas en frío.

Hola NOMBRE

Espero que no te moleste que escriba repentinamente, pero quería continuar con nuestra conversación en LUGAR DE REUNIÓN para ver si estarían interesados en que nosotros/as hiciéramos un taller de entrenamiento de docentes; estamos programando la próxima tanda para las próximas dos semanas.

Este taller de un día le enseñará a tus voluntarios una serie de prácticas útiles de enseñanza basadas en evidencia. Se ha impartido más de cien veces, de diversas maneras y en seis continentes, para organizaciones sin fines de lucro, bibliotecas y empresas, y todo el material está disponible gratuitamente en línea en http://teachtogether.tech.

El temario incluye:

  • estudiantes tipo

  • diferencias entre distintos tipos de estudiantes

  • uso de evaluaciones formativas para diagnosticar malentendidos

  • enseñanza como un arte performativo

  • qué motiva y desmotiva a estudiantes adultos

  • la importancia de la inclusividad y cómo ser un buen aliado/a

Si esto te resulta interesante, por favor avísame; sería muy bienvenida la oportunidad de hablar sobre modos y medios para hacerlo. Gracias, NOMBRE

Referencias

Construir alianzas con otros grupos que hacen cosas relacionadas con tu trabajo vale la pena de muchas maneras. Una de ellas son las referencias: si alguien que se te acerca en busca de ayuda sería mejor atendido/a por otra organización, tómate un momento para hacer una presentación. Si ya has hecho esto varias veces, agrega algo a tu sitio web que pueda ayudar a la próxima persona a encontrar lo que necesita. Las organizaciones a las que estás ayudando pronto empezarán a ayudarte a cambio.

Todo el mundo tiene miedo a lo desconocido y a pasar vergüenza frente a otros/as. En consecuencia, la mayoría de la gente prefiere fracasar que cambiar. Por ejemplo, Lauren Herckis investigó por qué el profesorado universitario no adopta mejores métodos de enseñanza. Ella halló que la razón principal es el miedo a parecer estúpido/a frente a los estudiantes; las razones secundarias fueron la preocupación por que los inevitables tropiezos al cambiar de métodos de enseñanza afecten las evaluaciones del curso (que a su vez afectan la promoción o los cargos estables/titulares) y el deseo de la gente de seguir imitando a los profesores/maestros que la han inspirado.

No tiene sentido discutir si estos problemas son “reales” o no: el profesorado cree que son reales, así que cualquier plan para trabajar con el profesorado necesita abordarlos.

Ellos/as preguntaron y respondieron tres preguntas clave:

¿Cómo se entera el profesorado de nuevas prácticas de enseñanza?

Buscan intencionalmente nuevas prácticas porque están motivados a resolver un problema (en particular, la participación de los estudiantes), se tornan conscientes a través de iniciativas deliberadas por parte de sus instituciones, las copian o replican de sus colegas, o las obtienen por interacciones esperadas e inesperadas en conferencias (relacionadas con la enseñanza o de otro tipo).

¿Por qué las prueban?

Algunas veces por incentivos institucionales (por ejemplo, innovan para mejorar sus chances de promoción), pero a veces hay tensión en instituciones de investigación donde la retórica sobre la importancia de la enseñanza tiene poca credibilidad. Otra razón importante es su propio análisis costo/beneficio: ¿la innovación les ahorrará tiempo? Una tercera razón es que se inspiran en modelos a seguir; otra vez, esto afecta en gran medida a las innovaciones que tienen como objetivo mejorar la motivación y la participación más que los resultados en el aprendizaje. Un cuarto factor son las fuentes confiables: por ejemplo, personas que han conocido en congresos o conferencias, que se encuentran en la misma situación que ellos/as y reportaron una adopción exitosa. Pero el profesorado tiene preocupaciones que no siempre son abordadas por quienes abogan por los cambios. La primera es la ley de Glass: cualquier nueva herramienta o práctica inicialmente te ralentiza, de modo que, aunque las nuevas prácticas pueden hacer la enseñanza más efectiva en el largo plazo, son costosas en el corto plazo. Otra es que la distribución física de las aulas dificulta muchas prácticas nuevas: por ejemplo, los grupos de discusión no funcionan bien en aulas con asientos estilo teatro.

Pero el resultado más revelador fue este: “A pesar de que ellos mismos son investigadores, el profesorado en ciencias de la computación con el que hablamos en su mayoría no creía que los resultados de estudios educativos fueran razones suficientemente creíbles para probar prácticas de enseñanza.” Esto es consistente con otros hallazgos: incluso personas cuyas carreras están dedicadas a la investigación a menudo ignoran la investigación en educación.

¿Por qué las siguen usando?

Como dice [Bark2015], “las devoluciones de los estudiantes son vitales” y normalmente son la razón más fuerte para continuar usando una práctica (junto con la asistencia a clases, que es un buen indicador de participación), aunque sabemos que las autoevaluaciones no se correlacionan fuertemente con los resultados del aprendizaje [Star2014, Uttl2017]. Otro motivo para retener una práctica es un requerimiento institucional, aunque si esta es la única motivación, las personas abandonarán la práctica cuando el incentivo explícito o el monitoreo desaparezcan.

La buena noticia es que puedes abordar estos problemas sistemáticamente. [Baue2015] observó la adopción de nuevas técnicas médicas dentro de la Administración de Veteranos de Estados Unidos. Hallaron que las prácticas médicas basadas en evidencia toman en promedio 17 años en ser incorporadas a la práctica general de rutina, y que solo la mitad de esas prácticas llegan a ser ampliamente adoptadas. Este deprimente hallazgo y otros han estimulado el crecimiento de la ciencia de la implementación (implementation science), que es el estudio de cómo lograr que la gente adopte mejores prácticas.

Como decía el Chapter 13, el punto de partida es averiguar qué creen que necesitan las personas a las que quieres ayudar. Por ejemplo, [Yada2016] resume los comentarios de docentes de primaria y secundaria sobre la preparación y el apoyo que quieren. Aunque puede no ser aplicable a todos los entornos, tomar una taza de té con unas pocas personas y escucharlas antes de hablar hace una gran diferencia en su voluntad de intentar algo nuevo.

Una vez que sabes qué es lo que la gente necesita, el siguiente paso es hacer cambios de manera incremental, dentro de los propios esquemas o entornos de las instituciones. [Nara2018] describe un programa intensivo de tres años de licenciatura, basado en cohortes muy unidas y apoyo administrativo, que triplicó las tasas de graduación, mientras que [Hu2017] describe el impacto de implementar un programa de certificación de seis meses para profesores de secundaria que quieran enseñar computación. El número de maestros de computación se mantuvo estable entre 2007 y 2013, pero se cuadruplicó después de la introducción del nuevo programa de certificación sin disminuir la calidad: los maestros que eran novatos en impartir computación parecían ser tan efectivos en el curso introductorio como los maestros con más entrenamiento en computación.

De modo más amplio, [Borr2014] categoriza maneras de lograr que ocurran cambios en la educación superior. Las categorías están definidas por si el cambio es individual o sistémico y si está prescripto (de arriba hacia abajo) o es emergente (de abajo hacia arriba). La persona que trata de hacer los cambios (y de hacer que duren) tiene un rol distinto en cada situación y, de manera acorde, debe seguir diferentes estrategias. El artículo continúa explicando en detalle cada uno de los métodos, mientras que [Hend2015a,Hend2015b] presentan las mismas ideas en una forma más procesable.

Si vienes desde afuera, probablemente en principio caigas en la categoría Individual/Emergente, dado que te aproximarás a los maestros uno a uno y tratarás de lograr que los cambios ocurran de abajo hacia arriba. Si este es el caso, las estrategias que Borrego y Henderson recomiendan se centran en lograr que los maestros reflexionen sobre su enseñanza, de manera individual o en grupos. Hacer programación en vivo para mostrarles lo que haces o los ejemplos que usas, y dejar que tengan su turno de programar en vivo para mostrar cómo usarían esas ideas y técnicas en su escenario, les da a todos/as la oportunidad de captar cosas que les serán útiles en su contexto.

Docentes de rango libre

Las escuelas y las universidades no son los únicos lugares a donde las personas pueden ir a aprender programación; en los últimos años, un número creciente ha recurrido a talleres de rango libre y programas intensivos. Estos últimos típicamente duran de uno a seis meses, son dirigidos por grupos de voluntarios/as o por empresas con fines de lucro, y tienen como objetivo a personas que se están re-entrenando para entrar en tecnología. Algunos son de muy alta calidad, pero otros existen principalmente para separar a las personas de su dinero [McMi2017].

[Thay2017] entrevistó a 26 graduados/as de estos entrenamientos intensivos, que proveen una segunda oportunidad para quienes no tuvieron antes acceso a educación en computación (aunque expresarlo de este modo implica grandes suposiciones cuando se refiere a personas de grupos poco representados). Los participantes de los entrenamientos intensivos enfrentan grandes costos y riesgos personales: deben invertir una cantidad significativa de tiempo, dinero y esfuerzo antes, durante y después de los entrenamientos intensivos, y cambiar de carrera puede tomar un año o más. Varios de los entrevistados sintieron que sus certificados fueron mal vistos por sus empleadores; como dijeron algunos, obtener un trabajo significa aprobar una entrevista, pero dado que quienes entrevistan muchas veces no comparten sus motivos de rechazo, es difícil saber qué arreglar o qué más aprender. Muchos/as han tenido que recurrir a pasantías (pagas o no) y pasan mucho tiempo construyendo sus portfolios y haciendo networking. Las tres barreras informales más fácilmente identificables son la jerga, el síndrome del impostor y una sensación de no encajar.

[Burk2018] profundizó en esto comparando las habilidades y credenciales que los reclutadores de la industria tecnológica buscan en graduados de entrenamientos intensivos y de carreras de cuatro años. Basándose en entrevistas con 15 gerentes de contratación de empresas de varios tamaños y en algunos grupos focales, encontraron que los reclutadores enfatizaban uniformemente las habilidades “blandas” (especialmente el trabajo en equipo, la comunicación y la habilidad para continuar aprendiendo). Muchas compañías requieren un título de cuatro años (aunque no necesariamente en informática), pero muchas también elogiaron a los graduados de entrenamientos intensivos por ser mayores en edad o más maduros y tener un conocimiento más actualizado.

Si te aproximas a un entrenamiento intensivo existente, tu mejor estrategia podría ser enfatizar lo que sabes sobre enseñanza en lugar de lo que sabes sobre tecnología, dado que muchos de sus fundadores y de su personal tienen experiencia en programación pero poca o ninguna capacitación en educación. Los primeros capítulos de este libro han servido bien en el pasado con esta audiencia, y [Lang2016] describe prácticas de enseñanza basadas en evidencia que pueden implementarse con mínimo esfuerzo y a bajo costo. Estas tal vez no tengan el mayor impacto, pero lograr algunas victorias tempranas ayuda a generar apoyo para esfuerzos más grandes.

Reflexiones Finales

Es imposible cambiar grandes instituciones por tu cuenta: necesitas aliados, y para conseguir aliados necesitas tácticas. La guía más útil que he encontrado es [Mann2015], que cataloga más de cuatro docenas de estas tácticas y las organiza de acuerdo a si se implementan mejor temprano, más tarde, a lo largo de todo el ciclo de cambio o cuando encuentras resistencia. Algunos de sus patrones incluyen:

En tu espacio:

Mantén la nueva idea visible ubicando recordatorios a lo largo de la organización.

Símbolo o recuerdo:

Para mantener viva una nueva idea en la memoria de una persona, entrega un recuerdo (token) que pueda identificarse con el tema que se está introduciendo.

Campeón escéptico:

Pide a líderes con opiniones fuertes que sean escépticos de la nueva idea que desempeñen el rol de “escéptico/a oficial”. Usa sus comentarios para mejorar tu esfuerzo, incluso si no logras cambiar su opinión.

Compromiso Futuro:

Si puedes anticipar algunas de sus necesidades, puedes pedir un compromiso futuro a las personas más ocupadas. Si se les da un tiempo de entrega, pueden estar más dispuestos a ayudar.

La estrategia más importante es estar dispuesto/a a cambiar tus metas según lo que aprendas de las personas a las que intentas ayudar. Un tutorial que les muestre cómo usar una hoja de cálculo podría ayudarlas de manera más rápida y confiable que una introducción a JavaScript. A menudo he cometido el error de confundir cosas que me apasionaban con cosas que las otras personas deberían saber; si realmente quieres ser quien acompañe, recuerda siempre que el aprendizaje y el cambio tienen que ir en ambos sentidos.

La parte más difícil de construir relaciones es comenzarlas. Reserva una o dos horas cada mes para encontrar aliados y mantener tus relaciones con ellos. Una forma de hacer esto es pedirles consejo: ¿Cómo creen que deberías crear conciencia de lo que están haciendo? ¿Dónde han encontrado espacio para dar clases? ¿Qué necesidades creen que no se están cumpliendo y serías capaz de cumplir? Cualquier grupo que haya existido durante algunos años tendrá consejos útiles; también se sentirán halagados de que se les haya consultado, y sabrán quién eres la próxima vez que llames.

Y como [Kuch2011] decía, si no puedes ser el primero/a en una categoría, intenta crear una nueva categoría en la que sí puedas ser el primero/a. Si no puedes hacer eso, únete a un grupo existente o piensa en hacer algo completamente diferente. Esto no es derrotista: si alguien más ya está haciendo lo que tienes en mente, deberías incorporarte o abordar una de las otras cosas igualmente útiles que podrías estar haciendo en su lugar.

Ejercicios

Discurso de presentación para un/a concejal (individual/10)

Este capítulo describe una organización que ofrece talleres de programación de fin de semana para personas que re-ingresan a la fuerza laboral. Escribe un discurso de presentación para esa organización dirigido a un/a concejal de la ciudad cuyo apoyo la organización necesita.

Presenta tu Organización (individual/30)

Identifica dos grupos de personas de los que tu organización necesite apoyo y escribe un discurso de presentación dirigido a cada uno.

Adjuntos de correo electrónico (pares/parejas/10)

Escriban las líneas de asunto (y solo las líneas de asunto) para tres mensajes de correo electrónico: uno anunciando un nuevo curso, uno anunciando un nuevo patrocinador y uno que anuncia un cambio en el liderazgo del proyecto. Comparen sus líneas de asunto con las de un compañero/a y vean si pueden combinar las mejores características de cada una y, al mismo tiempo, acortarlas.

Manejando la Resistencia Pasiva (grupos pequeños/30)

Las personas que no quieren cambios a veces lo dicen en voz alta, pero a menudo también usan varias formas de resistencia pasiva, como simplemente no lidiar con el asunto una y otra vez, o plantear un posible problema tras otro para hacer que el cambio parezca más arriesgado y más costoso de lo que probablemente es [Scot1987]. Trabajando en grupos pequeños, enumeren tres o cuatro razones por las cuales las personas podrían no querer que su iniciativa de enseñanza siga adelante, y expliquen qué pueden hacer con el tiempo y los recursos que tienen para contrarrestar cada una de esas razones.

¿Por qué/para qué aprender a programar? (individual/15)

Revisa el ejercicio “¿Por qué aprender a programar?” en Section [s:intro-exercise]. ¿Dónde se alinean tus razones para enseñar con las razones de tus estudiantes para aprender, y dónde no? ¿Cómo afecta eso a tu marketing?

Programadores/as conversacionales (pensar en parejas y compartir/15)

Un/a programador/a conversacional es alguien que necesita saber lo suficiente sobre informática para tener una conversación valiosa con un programador, pero ellos mismos no van a programar.

[Wang2018] descubrió que la mayoría de los recursos de aprendizaje no abordan las necesidades de este grupo. Trabajando en parejas, escriban un discurso para un taller de medio día destinado a ayudar a las personas que se ajustan a esta descripción y luego compartan el discurso de su pareja con el resto de la clase.

Colaboraciones (grupos pequeños/30)

Responde por tu cuenta las siguientes preguntas y luego compara tus respuestas con las dadas por otros miembros de tu grupo.

  1. ¿Tienes algún acuerdo o relación con otros grupos?

  2. ¿Quieres tener relaciones con algún otro grupo?

  3. ¿Cómo tener (o no tener) colaboraciones podría ayudarte a alcanzar tus objetivos?

  4. ¿Cuáles son tus relaciones colaborativas clave?

  5. ¿Son estas las colaboraciones adecuadas para alcanzar tus objetivos?

  6. ¿Con qué grupos o entidades quisieras que tu organización tuviera acuerdos o lazos?

Educacionalización (toda la clase/10)

[Laba2008] explora por qué los Estados Unidos y otros países siguen empujando la solución de problemas sociales hacia las instituciones educativas, y por qué eso sigue sin funcionar. Él remarca: “[La educación] ha hecho muy poco para promover la igualdad de raza, clase y género; para mejorar la salud pública, la productividad económica y la buena ciudadanía; o para reducir el sexo adolescente, las muertes por accidentes de tránsito, la obesidad y la destrucción ambiental. De hecho, de muchas maneras ha tenido un efecto negativo en estos problemas, sacando dinero y energía de reformas sociales que podrían tener un impacto más sustancial.” Él continúa escribiendo:

Entonces, ¿cómo debemos entender el éxito de esta institución a la luz de su fracaso en hacer lo que le pedimos? Una forma de pensar en esto es que la educación puede no estar haciendo lo que pedimos, pero está haciendo lo que queremos. Queremos una institución que persiga nuestros objetivos sociales de una manera que esté en línea con el individualismo en el corazón del ideal liberal, con el objetivo de resolver problemas sociales buscando cambiar los corazones, las mentes y las capacidades de cada estudiante. Otra forma de decir esto es que queremos una institución a través de la cual podamos expresar nuestros objetivos sociales sin violar el principio de elección individual que se encuentra en el centro de la estructura social, incluso si esto tiene el costo de no lograr esos objetivos. Entonces la educación puede servir como un punto de orgullo cívico, un escaparate para nuestros ideales y un medio para participar en disputas edificantes pero, en última instancia, intrascendentes sobre visiones alternativas de la buena vida. Al mismo tiempo, también puede servir como un conveniente chivo expiatorio al que podemos culpar por su fracaso en lograr nuestras más altas aspiraciones como sociedad.

¿Cómo encajan en este marco los esfuerzos por enseñar pensamiento computacional y ciudadanía digital en las escuelas? ¿Los entrenamientos intensivos evitan estas trampas o simplemente las presentan con una nueva apariencia?

Adopción Institucional (clase completa/15)

Relee la lista de motivaciones para adoptar nuevas prácticas dada en Section [s:outreach-schools]. ¿Cuáles de estas se aplican a ti y a tus colegas? ¿Cuáles son irrelevantes en tu contexto? ¿Cuáles enfatizarías si interactúas con personas que trabajan en instituciones educativas formales?

Si al principio no tienes éxito (grupos pequeños/15)

W.C. Fields probablemente nunca dijo: “Si al principio no tienes éxito, inténtalo, inténtalo de nuevo. Entonces déjalo; no sirve de nada ser un tonto al respecto.” Sigue siendo un buen consejo: si las personas con las que intentas comunicarte no responden, podría ser que nunca las convenzas. En grupos de 3 a 4, hagan una breve lista de señales de que se debe dejar de intentar hacer algo en lo que creen. ¿Cuántas de ellas ya se cumplen?

Logrando/Haciendo que falle (individual/15)

[Farm2006] presenta algunas reglas irónicas para lograr que las nuevas herramientas no sean adoptadas, todas las cuales aplican también a las nuevas prácticas de enseñanza:

  1. Hacerlo opcional.

  2. Economizar en entrenamiento.

  3. No usarlas en un proyecto real.

  4. Nunca integrarlas.

  5. Usarlas esporádicamente.

  6. Hacerlas parte de una iniciativa de calidad.

  7. Marginalizar al campeón.

  8. Capitalizar en los primeros errores.

  9. Hacer una inversión pequeña.

  10. Explotar miedo, incertidumbre, duda, pereza e inercia.

¿Cuáles de estas has visto aplicadas recientemente? ¿Cuáles has hecho tú mismo/a? ¿Qué forma tuvieron?

Mentoreo (toda la clase/15)

The Institute for African-American Mentoring in Computer Science ha publicado guías para mentorear estudiantes de doctorado. Lean individualmente, luego discutan como clase y califiquen los esfuerzos de su propio grupo como +1 (definitivamente lo hacemos), 0 (no estoy seguro/a o no aplica) o -1 (definitivamente no lo hacemos).

¿Por qué enseño?

Cuando comencé a trabajar como voluntario en la Universidad de Toronto, mis estudiantes me preguntaron por qué enseñaba gratis. Esta fue mi respuesta:

Cuando tenía tu edad, pensaba que las universidades existían para enseñarle a la gente a aprender. Más tarde, en la escuela de posgrado, pensaba que las universidades se dedicaban a investigar y a crear nuevos conocimientos. Sin embargo, ahora que tengo más de cuarenta años, pienso que lo que realmente te estamos enseñando es cómo hacerte cargo del mundo, porque vas a tener que hacerlo quieras o no.

Mis padres tienen setenta años. Ya no manejan el mundo; son las personas de mi edad quienes aprueban leyes y toman decisiones de vida o muerte en los hospitales. Y sin importar qué tan aterrador sea, nosotras/os somos las personas adultas.

En veinte años, nosotras/os estaremos camino hacia la jubilación y estarás a cargo. Eso puede parecer mucho tiempo cuando tienes diecinueve años, pero se pasa en un suspiro. Por eso te damos problemas cuyas respuestas no se pueden encontrar en las notas del año pasado. Por eso te ponemos en situaciones en las que tienes que decidir qué hacer ahora, qué se puede dejar para más tarde y qué puedes simplemente ignorar. Porque si no aprendes cómo hacer estas cosas ahora, no estarás lista/o para hacerlo cuando sea necesario.

Todo esto era verdad, pero no es toda la historia. No quiero que la gente haga del mundo un lugar mejor para que yo me pueda retirar cómodamente. Quiero que lo hagan porque es la aventura más grande de nuestro tiempo. Hace ciento cincuenta años, la mayoría de las sociedades practicaban la esclavitud. Hace cien años, en Canadá, mi abuela no era legalmente considerada una persona. El año en que nací, la mayoría de las personas del mundo sufrían bajo algún régimen totalitario, y los jueces todavía dictaminaban terapia de electroshock para “curar” a los homosexuales. Todavía hay muchas cosas que están mal en el mundo, pero mira cuántas opciones más tenemos que nuestros abuelos y abuelas. Mira cuántas cosas más podemos saber, ser y disfrutar porque finalmente nos estamos tomando en serio la Regla de Oro.

Hoy soy menos optimista que entonces. Cambio climático, extinción masiva, capitalismo de vigilancia, desigualdad a una escala que no veíamos desde hace un siglo, el resurgimiento del nacionalismo racista: mi generación vio cómo sucedía todo y se quedó de brazos cruzados. La factura de nuestra cobardía, letargo y avaricia no se pagará hasta que mi hija crezca, pero llegará, y para cuando lo haga no habrá soluciones fáciles para estos problemas (y posiblemente no haya soluciones en absoluto).

Así que por eso enseño: estoy enojado. Estoy enojado porque tu sexo, tu color y la riqueza y conexiones de tu madre y tu padre no deberían contar más que cuán inteligente, honesto/a o trabajador/a seas. Estoy enojado porque convertimos a Internet en una cloaca. Estoy enojado porque los nazis están en marcha una vez más y los multimillonarios juegan con cohetes espaciales mientras el planeta se derrite. Estoy enojado, y entonces enseño, porque el mundo solo mejora cuando enseñamos a las personas cómo mejorarlo.

En su ensayo de 1947 “¿Por qué escribo?”, George Orwell escribió:

En una época pacífica, podría haber escrito libros superficiales, decorativos o simplemente descriptivos, y podría haber permanecido casi inconsciente de mis lealtades políticas. Pero tal como están las cosas, me he visto obligado a convertirme en una especie de panfletista. Cada línea de trabajo serio que he escrito desde 1936 ha sido escrita, directa o indirectamente, en contra del totalitarismo. Me parece una tontería, en un período como el nuestro, pensar que uno/a puede evitar escribir sobre tales temas. Todos escriben al respecto de una manera u otra. La cuestión es simplemente elegir de qué lado lo hacemos.

Reemplaza “escribir” por “enseñar” y tendrás la razón por la que hago lo que hago.

Gracias por leer. Espero que podamos enseñar juntos/as algún día. Hasta entonces:

Comienza donde estás.
Usa lo que tienes.
Ayuda a quien puedas.

Licencia

Este es un resumen legible de la licencia para las personas (y no un sustituto de ella). Por favor mira https://creativecommons.org/licenses/by-nc/4.0/legalcode para el texto legal completo.

Este trabajo se licencia bajo Creative Commons Atribución – No Comercial 4.0 (CC-BY-NC-4.0).
Eres libre de:

  • Compartir—copiar y redistribuir el material en cualquier medio o formato

  • Adaptar—reacomodar, transformar y construir sobre el material.

El/la licenciante no puede revocar estas libertades mientras sigas los términos de la licencia.

Bajo los siguientes términos:

  • Atribución—Debes dar el crédito apropiado, proporcionar un enlace a la licencia e indicar si se hicieron cambios. Puedes hacerlo de cualquier manera razonable, pero no de una forma que sugiera que el/la licenciante te respalda a ti o al uso que le das al material.

  • No Comercial—No puedes utilizar el material con fines comerciales.

Sin restricciones adicionales—No puedes aplicar términos legales o medidas tecnológicas que restrinjan legalmente a otros/as de hacer cualquier cosa que la licencia permita.

Avisos:

  • No tienes que cumplir con la licencia para aquellos elementos del material que son de dominio público o cuando su uso esté permitido por una excepción o limitación aplicable.

  • No se otorgan garantías. Es posible que la licencia no otorgue todos los permisos necesarios para el uso que se pretende dar al material. Por ejemplo, derechos relacionados a la publicidad, privacidad o derechos morales pueden limitar la forma en la que puedes usar el material.

Código de Conducta

Con el objetivo de fomentar un ambiente abierto y amigable, las personas encargadas del proyecto, colaboradoras/es y personas de soporte, nos comprometemos a hacer de la participación en nuestro proyecto y en nuestra comunidad una experiencia libre de acoso para todas las personas, independientemente de edad, tamaño corporal, discapacidad, etnia, identidad y expresión de género, nivel de experiencia, educación, nivel socioeconómico, nacionalidad, apariencia personal, raza, religión o identidad y orientación sexual.

Nuestros Estándares

Ejemplos de comportamiento que contribuye a crear un ambiente positivo para nuestra comunidad:

  • utilizar un lenguaje amigable e inclusivo,

  • respetar diferentes puntos de vista y experiencias,

  • aceptar adecuadamente la crítica constructiva,

  • enfocarse en lo que es mejor para la comunidad y

  • mostrar empatía hacia otros miembros de la comunidad.

Ejemplos de comportamiento inaceptable:

  • el uso de lenguaje o imágenes sexualizadas así como dar atención o generar avances sexuales no deseados,

  • ofender o provocar de modo malintencionado (trolling), comentarios despectivos, insultantes y ataques personales o políticos,

  • acoso público o privado,

  • publicar información privada de otras personas, tales como direcciones físicas o de correo electrónico, sin su permiso explícito, y

  • otras conductas que puedan ser razonablemente consideradas como inapropiadas en un entorno profesional

Nuestras Responsabilidades

Las personas encargadas del proyecto somos responsables de aclarar los estándares de comportamiento aceptable, y se espera que tomemos medidas correctivas apropiadas y justas en respuesta a cualquier caso de comportamiento inaceptable.

Las personas encargadas del proyecto tienen el derecho y la responsabilidad de eliminar, editar o rechazar comentarios, commits, código, ediciones en la wiki, issues y otras contribuciones que no estén alineadas con este Código de Conducta. También pueden prohibir la participación temporal o permanente de cualquier persona por comportamientos que sean considerados inapropiados, amenazantes, ofensivos o dañinos.

Alcance

Este Código de Conducta aplica tanto en espacios dentro del proyecto como en espacios públicos, mientras una persona represente al proyecto o a la comunidad. Ejemplos de representación del proyecto o la comunidad incluyen el uso de una dirección de correo electrónico oficial del proyecto, realizar publicaciones a través de una cuenta oficial de redes sociales o actuar como representante designada/o en cualquier evento presencial o en línea. La representación del proyecto puede ser aclarada y definida en más detalles por las personas encargadas.

Aplicación

Los casos de comportamiento abusivo, acosador o inaceptable pueden ser denunciados enviando un correo electrónico a la persona encargada del proyecto a la dirección gvwilson@third-bit.com. Todas las quejas serán revisadas e investigadas y darán como resultado una respuesta que se considere necesaria y apropiada a las circunstancias. El equipo encargado del proyecto está obligado a mantener la confidencialidad de quien reporte un incidente. Se pueden publicar por separado más detalles de políticas de aplicación específicas.

Aquellas personas encargadas del proyecto que no cumplan o no apliquen este código de conducta de buena fe pueden enfrentar repercusiones temporales o permanentes determinadas por el resto del equipo encargado del proyecto.

Atribución

Este código de conducta es una adaptación del Contributor Covenant, versión 1.4.

Unirse a nuestra comunidad

Esperamos que elijas ayudarnos a hacer lo mismo para este libro. Si esta forma de trabajo es nueva para ti, consulta el Appendix 17 y nuestro código de conducta, y luego:

Empieza pequeño.

Arregla un error tipográfico, aclara la redacción de un ejercicio, corrige o actualiza una cita, o sugiere un mejor ejemplo o analogía para ilustrar algún punto.

Únete a la conversación.

Mira los issues y los cambios propuestos por otras personas y añádeles tus comentarios. A menudo es posible mejorar las mejoras, y es una buena manera de presentarte a la comunidad y hacer nuevas amistades.

Discute, luego edita.

Si quieres proponer un gran cambio, como reorganizar o dividir un capítulo completo, por favor abre un issue que describa tu propuesta y tu razonamiento, y etiquétalo como “Proposal” (propuesta en inglés). Te alentamos a que agregues comentarios a estos issues para que toda la discusión sobre el qué y el por qué esté abierta y quede archivada. Si se acepta la propuesta, el trabajo real puede dividirse en varios issues o cambios más pequeños que se pueden abordar de forma independiente.

Usando este material

Como se declaró en Chapter 1, todo este material puede distribuirse y reutilizarse libremente bajo la licencia Creative Commons Atribución – No Comercial 4.0 (Appendix 16). Puedes usar la versión en línea en http://teachtogether.tech/ en cualquier clase (gratuita o de pago), y puedes citar extractos breves bajo las disposiciones de uso justo, pero no puedes volver a publicar fragmentos grandes en obras comerciales sin permiso previo.

Este material ha sido usado de muchas maneras, desde una clase en línea de varias semanas hasta un taller intensivo en persona. Por lo general, es posible cubrir grandes partes de los capítulos Chapter 2 a Chapter 6, Chapter 8, y Chapter 10 en dos días de jornada completa.

En persona

Esta es la forma más efectiva de impartir esta capacitación, pero también la más exigente. Las personas que participan están físicamente en el mismo lugar. Cuando necesitan practicar cómo enseñar en pequeños grupos, parte de la clase o toda la clase va a espacios de descanso cercanos. Cada participante usa su propia tableta o computadora portátil para ver material en línea durante la clase y para tomar notas compartidas (Section 9.7), y usa lápiz y papel o pizarras para otros ejercicios. Las preguntas y la discusión se hacen en voz alta.

Si estás enseñando en este formato, debes usar notas adhesivas como indicadores de estado para poder ver quién necesita ayuda, quién tiene preguntas y quién está listo/a para seguir adelante (Section 9.8). También debes usarlas para distribuir la atención, de modo que todas las personas reciban tu atención y tu tiempo de forma justa, y como tarjetas de minutos para alentar a tus estudiantes a reflexionar sobre lo que acaban de aprender y para darte retroalimentación procesable mientras todavía tienes tiempo de actuar en consecuencia.

En línea en grupos

En este formato, 10 a 40 estudiantes se juntan en 2 a 6 grupos de 4 a 12 personas, pero esos grupos están distribuidos geográficamente. Cada grupo usa una cámara y un micrófono para conectarse a la videollamada, en lugar de que cada persona esté en la llamada por separado. Un buen audio es más importante que un buen video en ambas direcciones: una voz sin imágenes (como la radio) es mucho más fácil de entender que las imágenes sin narrativa, y los/as instructores/as no necesitan poder ver a las personas para responder preguntas, siempre y cuando esas preguntas se puedan escuchar con claridad. Dicho esto, si una lección no es accesible, entonces no es útil (Section 10.3): proporcionar texto descriptivo es una ayuda cuando la calidad del audio es deficiente, e incluso si el audio es bueno resulta importante para aquellas personas con dificultades auditivas.

Toda la clase toma notas compartidas, y también usa las notas compartidas para hacer y responder preguntas. Tener varias decenas de personas tratando de hablar en una llamada no funciona bien, así que en la mayoría de las sesiones el/la profesor/a habla y sus estudiantes responden a través del chat de la herramienta para tomar notas.

En línea de forma individual

La extensión natural de estar en línea en grupos es estar en línea en forma individual. Al igual que con los grupos en línea, el/la docente hablará la mayoría de las veces y los/las estudiantes participarán principalmente a través del chat de texto. También en este caso, un buen audio es más importante que un buen video, y quienes participan deberían usar el chat de texto para indicar que quieren hablar (Appendix 20).

Tener participantes en línea individualmente hace que sea más difícil dibujar y compartir mapas conceptuales (Section 3.4) o dar retroalimentación sobre la enseñanza (Section 8.5). Por lo tanto, quienes enseñen deberán confiar más en el uso de ejercicios con resultados escritos que se puedan poner en las notas compartidas, como por ejemplo dar una devolución sobre videos de personas enseñando.

En línea durante varias semanas

La clase se reúne una hora por semana a través de videoconferencia. Cada reunión puede realizarse dos veces para acomodar las zonas horarias y los horarios de los/las estudiantes. Los/las participantes toman notas compartidas como se describió para las clases grupales en línea, publican tareas en línea entre clases y comentan el trabajo de las demás personas. En la práctica, los comentarios son relativamente raros: la gente prefiere discutir el material en las reuniones semanales.

Este fue el primer formato utilizado, y ya no lo recomiendo: mientras que extender la clase les da a las personas tiempo para reflexionar y abordar ejercicios más extensos, también aumenta en gran medida las probabilidades de que tengan que abandonar debido a otras demandas de su tiempo.

Contribuyendo y Manteniendo

Contribuciones de todo tipo son bienvenidas, desde sugerencias para mejoras hasta erratas y nuevo material. Todas las personas que contribuyan deben cumplir con nuestro Código de Conducta (Appendix 17); al enviar tu trabajo, aceptas que pueda incorporarse tanto en forma original como editada y que pueda ser publicado bajo la misma licencia que el resto de este material (Appendix 16).

Si tu material es incorporado, te agregaremos a los agradecimientos (Section 1.3) a menos que solicites lo contrario.

La fuente de la versión original de este libro se almacena en GitHub en:

https://github.com/gvwilson/teachtogether.tech/

Si sabes cómo usar Git y GitHub y deseas cambiar, arreglar o agregar algo, por favor envía un pull request que modifique el código fuente en LaTeX. Si deseas obtener una vista previa de tus cambios, ejecuta make pdf o make html en la línea de comandos.

Si quieres reportar un error, hacer una pregunta, o hacer una sugerencia, presenta un issue en el repositorio. Necesitas tener una cuenta de GitHub para hacer esto, pero no necesitas saber cómo usar Git.

Si no deseas crear una cuenta de GitHub, envía tu contribución por correo electrónico a gvwilson@third-bit.com con “T3” o “Teaching Tech Together” en algún lugar del asunto. Intentaremos responder en una semana.

Finalmente, siempre nos gusta escuchar cómo se ha usado este material, y estamos siempre agradecidos/as por el aporte de más diagramas.

Glosario

Agotamiento del ego

El deterioro del autocontrol debido al uso prolongado o intensivo. Trabajos recientes no han podido corroborar su existencia.

Amenaza del estereotipo

Una situación en la que las personas sienten que corren el riesgo de ser sometidas a los estereotipos de su grupo social.

Andamiaje

Material adicional que se proporciona a las/os estudiantes en etapas iniciales para ayudarlas/os a resolver problemas.

Aprendizaje activo

Un enfoque de la enseñanza en el que los estudiantes se involucran con el material a través de la discusión, resolución de problemas, estudios de casos, y otras actividades que requieren que reflexionen y usen nueva información en tiempo real. Ver también aprendizaje pasivo.

Aprendizaje basado en la indagación

La práctica de permitir que las/os estudiantes hagan sus propias preguntas, establezcan sus propios objetivos y encuentren su propio camino a través de un tema.

Aprendizaje cognitivo

Una teoría del aprendizaje que enfatiza el proceso por el cual la/el docente transmite habilidades e ideas al estudiante de forma situada.

Aprendizaje pasivo

Un enfoque de la enseñanza en el que las/os estudiantes leen, escuchan o miran sin utilizar inmediatamente nuevos conocimientos. El aprendizaje pasivo es menos efectivo que el aprendizaje activo.

Aprendizaje personalizado

Adaptación automática de lecciones para satisfacer las necesidades de los alumnos individuales.

Aprendizaje situado

Un modelo de aprendizaje que se centra en la transición de las personas de ser recién llegadas a ser miembros aceptados de una comunidad de práctica.

Artefacto tangible

Algo en lo que una/un estudiante puede trabajar y cuyo estado proporciona retroalimentación sobre el progreso que la/el estudiante realizó y ayuda a diagnosticar errores.

Aula invertida

Una clase en la que los alumnos ven lecciones grabadas por su cuenta, mientras que el tiempo de clase se utiliza para resolver conjuntos de problemas y responder preguntas.

Automaticidad

La capacidad de hacer una tarea sin concentrarse en sus detalles de bajo nivel.

Carga cognitiva

El esfuerzo mental necesario para resolver un problema. La teoría de la carga cognitiva lo divide en carga intrínseca, carga pertinente y carga extrínseca, y sostiene que las personas aprenden más rápido cuando se reducen la carga pertinente y la extrínseca.

Carga extrínseca

Cualquier carga cognitiva que distrae del aprendizaje.

Carga intrínseca

La carga cognitiva requerida para absorber nueva información.

Carga pertinente

La carga cognitiva requerida para vincular la nueva información con la antigua.

Ciencia de implementación

El estudio de cómo traducir los hallazgos de la investigación a la práctica clínica diaria.

Ciencias de la computación acústicas

Un estilo de enseñanza que introduce conceptos informáticos utilizando ejemplos y artefactos que no son de programación.

Co-enseñanza

Enseñar con otro docente en el salón de clases.

Cognición externalizada

El uso de ayuda gráfica, física o verbal para aumentar el pensamiento.

Cognitivismo

Una teoría del aprendizaje que sostiene que los estados y procesos mentales pueden y deben incluirse en los modelos de aprendizaje. Ver también conductismo.

Commons

Algo gestionado conjuntamente por una comunidad de acuerdo con las reglas que la misma comunidad ha desarrollado y adoptado.

Comunidad de práctica

Un grupo que se perpetúa a sí mismo y cuyos miembros comparten y desarrollan un oficio, como tejedoras/es, músicas/os o programadoras/es. Ver también participación inicial legítima.

Conductismo

Una teoría del aprendizaje cuyo principio central es estímulo y respuesta, y cuyo objetivo es explicar el comportamiento sin recurrir a estados mentales internos u otros inobservables. Ver también cognitivismo.

Conectivismo

Una teoría del aprendizaje que sostiene que el conocimiento se distribuye, que el aprendizaje es el proceso de navegación, crecimiento y poda de conexiones, y que enfatiza los aspectos sociales del aprendizaje hechos posibles por Internet.

Conocimiento de contenido pedagógico

(PCK, por sus siglas en inglés) La comprensión de cómo enseñar un tema en particular, es decir, el mejor orden en el cual introducir temas y qué ejemplos usar. Ver también conocimiento del contenido y conocimiento pedagógico general.

Conocimiento del contenido

La comprensión que una persona tiene de un tema. Ver también conocimiento pedagógico general y conocimiento de contenido pedagógico.

Conocimiento pedagógico general

La comprensión que una persona tiene de los principios generales de la enseñanza. Ver también conocimiento del contenido y conocimiento de contenido pedagógico.

Constructivismo

Una teoría del aprendizaje que considera que las/os estudiantes construyen activamente el conocimiento.

Contribuyendo a la pedagogía estudiantil

Hacer que las/os estudiantes produzcan artefactos que contribuyan al aprendizaje de otras personas.

CS0. Introducción a las ciencias de la computación

Un curso introductorio de nivel universitario sobre computación dirigido a estudiantes que no se especializan en el área, con poca o ninguna experiencia previa en programación.

CS1. Ciencias de la computación I

Un curso introductorio de ciencias de la computación a nivel universitario, generalmente de un semestre, que se enfoca en variables, bucles, funciones y otras mecánicas básicas.

CS2. Ciencias de la computación II

Un segundo curso de ciencias de la computación de nivel universitario que generalmente presenta estructuras de datos básicas como pilas, colas y diccionarios.

Curso en línea masivo

(MOOC, por sus siglas en inglés) Un curso en línea diseñado para la inscripción masiva y el estudio asincrónico, que generalmente usa videos grabados y calificaciones automáticas.

Desarrollo basado en test

Una práctica de desarrollo de software en la que las/os programadoras/es escriben primero los test, para darse objetivos concretos y aclarar su comprensión de cómo se ve el trabajo "terminado".
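
Un esbozo mínimo en Python a modo de ilustración (los nombres sumar y test_sumar son hipotéticos): el test se escribe primero y define de forma concreta qué significa “terminado”.

```python
# Primero el test: fija un objetivo concreto antes de escribir el código.
def test_sumar():
    assert sumar(2, 3) == 5
    assert sumar(-1, 1) == 0

# Recién después, el código mínimo que hace pasar el test.
def sumar(a, b):
    return a + b

test_sumar()  # no lanza AssertionError: el objetivo se cumplió
```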

Directorio o junta de servicio

Una junta cuyos miembros asumen roles de trabajo en la organización.

Directorio. Junta

Una junta cuya responsabilidad principal es contratar, supervisar y, si es necesario, despedir al director.

Discurso de presentación

Una breve descripción de una idea, proyecto, producto o persona que se puede dar y comprender en solo unos segundos.

Diseño instruccional

El arte de crear y evaluar lecciones específicas para audiencias específicas. Ver también psicología educacional.

Distractor plausible

Una respuesta incorrecta, pero que podría ser correcta, a una pregunta de opción múltiple. Ver también poder de diagnóstico.

Efecto de atención dividida

La disminución que ocurre en el aprendizaje cuando las/os estudiantes deben dividir su atención entre múltiples presentaciones concurrentes de la misma información (por ejemplo, subtítulos y una voz en off).

Efecto de hipercorrección

Cuanto más crea alguien que su respuesta en un examen era correcta, más probabilidades hay de que no repita el error una vez que descubre que, de hecho, estaba equivocada/o.

Efecto Dunning-Kruger

La tendencia de las personas que solo saben un poco sobre un tema a estimar incorrectamente su comprensión del mismo.

Efecto inverso de la experiencia

La forma en que la instrucción que es efectiva para los novatos se vuelve ineficaz para los profesionales competentes o expertos.

Ejemplos desvanecidos

Una serie de ejemplos en los que se borra un número cada vez mayor de pasos clave. Ver también andamiaje.

Enseñar para el examen

Cualquier método de "educación" que se centre en preparar a las/los estudiantes para aprobar los exámenes estandarizados, en lugar de aprender realmente.

Enseñanza activa

Un enfoque de la instrucción en el que la/el docente actúa sobre la nueva información que obtiene de sus estudiantes mientras enseña (por ejemplo, cambiando dinámicamente un ejemplo o reorganizando el orden previsto del contenido). Ver también enseñanza pasiva.

Enseñanza pasiva

Un enfoque a la enseñanza en el que la/el docente no ajusta el ritmo o los ejemplos, o no actúa de acuerdo con los comentarios de las/os estudiantes, durante la lección. Ver también enseñanza activa.

Estudiante free-range. Estudiante de rango libre.

Alguien que aprende fuera de un aula institucional con un plan de estudios y tareas obligatorias. (Quienes usan este término ocasionalmente se refieren a los estudiantes en las aulas como "estudiantes battery-farmed", pero nosotros no lo hacemos porque sería grosero).

Estudiante tipo

Una breve descripción de un/a estudiante objetivo tipo para una lección que incluye: sus antecedentes generales, lo que ya sabe, lo que quiere hacer, cómo la lección le ayudará y cualquier necesidad especial que puedan tener.

Etiquetado de submetas

Dar nombres a cada paso en una descripción paso a paso de un proceso de resolución de problemas.

Evaluación formativa

Evaluación que se lleva a cabo durante una clase para dar retroalimentación tanto al alumno como al profesor sobre la comprensión real. Ver también evaluación sumativa.

Evaluación sumativa

Evaluación que se realiza al final de una lección para determinar si se ha realizado el aprendizaje deseado.

Experta/o

Alguien que puede diagnosticar y manejar situaciones inusuales, sabe cuándo no se aplican las reglas habituales y tiende a reconocer soluciones en lugar de razonarlas. Ver también practicante competente y novata/o.

Falla productiva

Una situación en la que a las/os estudiantes se les dan deliberadamente problemas que no pueden resolver con el conocimiento que tienen, y deben salir a adquirir nueva información para progresar. Ver también Zona de Desarrollo Proximal.

Falso principiante

Alguien que ha estudiado un idioma antes pero lo está aprendiendo nuevamente. Los falsos principiantes comienzan en el mismo punto que los principiantes verdaderos (es decir, en una evaluación inicial mostrarán el mismo nivel de competencia) pero pueden avanzar mucho más rápidamente.

Flujo

La sensación de estar completamente inmerso en una actividad, frecuentemente asociada con una alta productividad.

Fuzz testing

Una técnica de prueba de software basada en generar y enviar datos aleatorios.
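
Un esquema mínimo en Python a modo de ilustración (la función procesar es una suposición del ejemplo, no parte del libro): se generan entradas aleatorias y se verifica que el programa no falle y cumpla una propiedad simple.

```python
import random
import string

def procesar(texto):
    # Función hipotética a probar: invierte el texto recibido.
    return texto[::-1]

# Fuzz testing: generar entradas aleatorias y enviarlas al programa,
# verificando que no lance excepciones y que cumpla una propiedad.
random.seed(0)
for _ in range(1000):
    largo = random.randint(0, 50)
    entrada = "".join(random.choice(string.printable) for _ in range(largo))
    resultado = procesar(entrada)
    # Invertir dos veces debe restaurar la entrada original.
    assert procesar(resultado) == entrada
```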

Hashing

Generar una clave digital pseudoaleatoria condensada a partir de datos; cualquier entrada específica produce la misma salida, pero es muy probable que diferentes entradas produzcan diferentes salidas.
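
Por ejemplo, con el módulo hashlib de la biblioteca estándar de Python (las cadenas de entrada son arbitrarias):

```python
import hashlib

# La misma entrada produce siempre la misma salida...
h1 = hashlib.sha256(b"hola").hexdigest()
h2 = hashlib.sha256(b"hola").hexdigest()
assert h1 == h2

# ...pero es muy probable que entradas diferentes
# produzcan salidas diferentes.
h3 = hashlib.sha256(b"adios").hexdigest()
assert h1 != h3
```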

Impotencia aprendida

Una situación en la que las personas sometidas repetidamente a comentarios negativos de los que no tienen forma de escapar aprenden a ni siquiera intentar escapar cuando sí pueden hacerlo.

Inclusión

Trabajar activamente para incluir personas con diversos antecedentes y necesidades.

Instrucción directa

Un método de enseñanza centrado en un diseño curricular meticuloso dictado a través de guiones preescritos.

Instrucción por pares

Un método de enseñanza en el que la/el docente hace una pregunta y luego las/os estudiantes se comprometen con una primera respuesta, discuten las respuestas con sus compañeras/os y se comprometen con una respuesta revisada.

Integración Computacional

Usar la informática para volver a implementar artefactos culturales preexistentes, por ejemplo, crear variantes de diseños tradicionales usando herramientas de dibujo por computadora.

Intuición

La capacidad de comprender algo de inmediato, sin necesidad aparente de razonamiento consciente.

Inventario de conceptos

Una prueba diseñada para determinar qué tan bien un alumno comprende un dominio. A diferencia de la mayoría de las pruebas realizadas por instructores, los inventarios de conceptos se basan en una extensa investigación y validación.

Jugyokenkyu

Literalmente "estudio de lección", un conjunto de prácticas que incluye hacer que las/os docentes se observen rutinariamente entre sí y discutan las lecciones que imparten para compartir conocimientos y mejorar habilidades.

Lección de demostración

Una lección dictada por un/a docente a estudiantes reales mientras otras/os docentes observan para aprender nuevas técnicas de enseñanza.

Leer-cubrir-recuperar

Una práctica de estudio en la que la/el estudiante cubre hechos o términos clave durante una primera pasada por el material y luego verifica cuánto recuerda en una segunda pasada.

Manual mínimo

Un enfoque de capacitación que divide cada tarea en instrucciones de una sola página que también explican cómo diagnosticar y corregir errores comunes.

Manual

Material de referencia destinado a ayudar a alguien que ya comprende un tema a completar (o recordar) detalles.

Mapa conceptual

Una imagen de un modelo mental en el que los conceptos son nodos en un gráfico y las relaciones entre esos conceptos son arcos (etiquetados).

Máquina nocional

Un modelo general simplificado de cómo se ejecuta una familia particular de programas.

Marca

Las asociaciones que las personas tienen con el nombre de un producto o identidad.

Marketing

El arte de ver las cosas desde la perspectiva de otras personas, comprender sus deseos y necesidades y encontrar formas de satisfacerlos.

Memoria de corto plazo

La parte de la memoria que almacena brevemente información a la que puede acceder directamente la conciencia.

Memoria de largo plazo

La parte de la memoria que almacena información durante largos períodos de tiempo. La memoria a largo plazo es muy grande, pero lenta. Ver también memoria de corto plazo.

Memoria de trabajo

Ver memoria de corto plazo.

Memoria persistente

Ver memoria de largo plazo.

Mentalidad de crecimiento

La creencia de que la habilidad viene con la práctica. Ver también mentalidad fija.

Mentalidad fija

La creencia de que una habilidad es innata y que el fracaso se debe a la falta de algún atributo necesario. Ver también mentalidad de crecimiento.

Metacognición

Pensar sobre pensar.

Modelo deficitario

La idea de que algunos grupos están subrepresentados en informática (o algún otro campo) porque sus miembros carecen de algún atributo o calidad.

Modelo mental

Una representación simplificada de los elementos y relaciones clave de algún dominio de problemas, lo suficientemente buena como para apoyar la resolución de problemas.

Motivación extrínseca

Ser impulsada/o por recompensas externas como el pago o el miedo al castigo. Ver también motivación intrínseca.

Motivación intrínseca

Ser impulsada/o por el disfrute o la satisfacción de hacer una tarea como un fin en sí misma. Ver también motivación extrínseca.

Optimización de posicionamiento en motores de búsqueda

(SEO, por sus siglas en inglés) Aumentar la cantidad y la calidad del tráfico del sitio web al hacer que las páginas sean más fáciles de encontrar o parezcan más importantes para los motores de búsqueda.

Notas guiadas

Notas preparadas por la/el docente que indican a las/os estudiantes que respondan a la información clave en una conferencia o discusión.

Novata/o. Persona novata. Principiante

Alguien que aún no ha construido un modelo mental utilizable de un dominio. Ver también practicante competente y experta/o.

Objetivo de aprendizaje

Qué está intentando lograr la lección.

Paradoja de la reusabilidad

Sostiene que cuanto más reutilizable es una lección, menos efectiva resulta pedagógicamente.

Fragmentación

El acto de agrupar conceptos relacionados de modo que puedan almacenarse y procesarse como una sola unidad.

Participación inicial legítima

La participación de las/os recién llegadas/os en tareas simples y de bajo riesgo que una comunidad de práctica reconoce como contribuciones válidas.

Pensamiento computacional

Pensar la resolución de problemas en formas inspiradas en la programación (aunque el término se usa de muchas otras maneras).

Piensa-trabaja en pareja-comparte

Un método de colaboración en el que cada persona piensa individualmente sobre una pregunta o problema, luego se junta con otra/o compañera/o para compartir ideas, y luego una persona de cada pareja presenta para todo el grupo.

Poder de diagnóstico

El grado en que una respuesta incorrecta a una pregunta o ejercicio le dice al docente qué conceptos erróneos tiene un/a estudiante en particular.

Posicionamiento

Lo que diferencia a una marca de otras marcas similares.

Práctica deliberada

El acto de observar el desempeño de una tarea mientras se realiza para mejorar la capacidad.

Práctica reflexiva

Ver práctica deliberada.

Practicante competente

Alguien que puede realizar tareas normales con un esfuerzo normal en circunstancias normales. Ver también principiante y experta/o.

Primero los objetos

Un enfoque para enseñar programación en el que los objetos y las clases se introducen desde el principio.

Principiante absoluto

Alguien que nunca antes se ha encontrado con los conceptos o el material. El término se usa en contraposición a falso principiante.

Privilegio preparatorio

La ventaja de provenir de un entorno que proporciona más preparación para una tarea de aprendizaje en particular que otras.

Problemas de Parsons

Una técnica de evaluación desarrollada por Dale Parsons y otras personas en la que las/os estudiantes reorganizan material dado para construir una respuesta correcta a una pregunta [Pars2006].
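
Un ejemplo ilustrativo en Python (el nombre suma_lista y el enunciado son hipotéticos): las líneas se entregan desordenadas y la/el estudiante debe ordenarlas.

```python
# Enunciado: reordena estas líneas (dadas aquí desordenadas) para que
# la función devuelva la suma de una lista:
#
#         total += v
#     return total
#     total = 0
# def suma_lista(valores):
#     for v in valores:
#
# Una respuesta correcta:
def suma_lista(valores):
    total = 0
    for v in valores:
        total += v
    return total

print(suma_lista([1, 2, 3]))  # imprime 6
```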

Programación en pareja

Una práctica de desarrollo de software en la que dos programadores comparten una computadora. Un programador (el piloto) escribe, mientras que el otro (el navegante) ofrece comentarios y sugerencias en tiempo real. La programación en pareja a menudo se usa como práctica docente en las clases de programación.

Programación en vivo

El acto de enseñar programación escribiendo software frente a los alumnos a medida que avanza la lección.

Programadora conversacional

Alguien que necesita saber lo suficiente sobre computación para tener una conversación significativa con un/a programador/a, pero que no va a programar por sí misma/o.

Psicología Educacional

El estudio de cómo la gente aprende. Ver también diseño instruccional.

Pull request

Un conjunto de cambios propuestos a un repositorio de GitHub que pueden revisarse, actualizarse y, finalmente, agregarse al repositorio.

Punto ciego del experto

La incapacidad de las personas expertas para empatizar con las personas novatas que se encuentran por primera vez con conceptos o prácticas.

Reingeniería

Un método de diseño instruccional que trabaja hacia atrás desde una evaluación sumativa hasta evaluaciones formativas y desde allí al contenido de la lección.

Representación de la comunidad

Usar el capital cultural para resaltar las identidades sociales, las historias y las redes comunitarias de las/os estudiantes en las actividades de aprendizaje.

Representación fluida

La capacidad de moverse rápidamente entre diferentes modelos de un problema.

Resultado de aprendizaje

Qué es lo que la lección realmente logra.

Revisión por pares calibrada

Hacer que los alumnos comparen sus revisiones del trabajo de ejemplo con las de un maestro antes de que se les permita revisar el trabajo de sus pares.

Síndrome del impostor

Un sentimiento de inseguridad sobre los logros propios, que se manifiesta como un miedo a ser expuesto como un fraude.

Sistema de gestión de aprendizaje

(LMS, por sus siglas en inglés): Una aplicación para registrar la inscripción a cursos, presentaciones de ejercicios, calificaciones y otros aspectos burocráticos del aprendizaje formal en el aula.

Tarea auténtica

Una tarea que contiene elementos importantes de cosas que los alumnos harían en situaciones reales (fuera del aula). Para ser auténtica, una tarea debe requerir que los alumnos construyan sus propias respuestas en lugar de elegir entre respuestas dadas, y que trabajen con las mismas herramientas y datos que usarían en la vida real.

Tarjetas de minutos

Una técnica de retroalimentación en la que las/os estudiantes pasan un minuto escribiendo una cosa positiva sobre una lección (por ejemplo, una cosa que han aprendido) y una cosa negativa (por ejemplo, una pregunta que aún no ha sido respondida).

Taxonomía de Bloom

Una clasificación jerárquica de la comprensión, ampliamente adoptada, de seis etapas, cuyos niveles son conocimiento, comprensión, aplicación, análisis, síntesis y evaluación. Ver también Taxonomía de Fink.

Taxonomía de Fink

Una clasificación de comprensión no jerárquica de seis partes, propuesta por primera vez en [Fink2013] cuyas categorías son conocimiento fundamental, aplicación, integración, dimensión humana, cuidado y aprender a aprender. Ver también Taxonomía de Bloom.

Transferencia apropiada de procesamiento

La mejora en la retención que ocurre cuando la práctica utiliza actividades similares a las utilizadas en los test.

Transferencia cercana

Transferencia de aprendizaje entre dominios estrechamente relacionados, por ejemplo, mejora en la comprensión de decimales como resultado de hacer ejercicios con fracciones.

Transferencia de aprendizaje

Aplicar el conocimiento aprendido en un contexto a problemas en otro contexto. Ver también transferencia cercana y transferencia lejana.

Transferencia lejana

La transferencia de aprendizaje entre dominios ampliamente separados, por ejemplo, mejora en las habilidades matemáticas como resultado de jugar al ajedrez.

Tutorial

Una lección destinada a ayudar a alguien a mejorar su comprensión general de un tema.

Twitch coding

Hacer que un grupo de personas decida momento a momento o línea por línea qué agregarle a un programa a continuación.

Usuario final docente

Por analogía con usuario final programador, alguien que enseña con frecuencia, pero cuya ocupación principal no es la enseñanza, que tiene poca o ninguna experiencia en pedagogía y que puede trabajar fuera de las aulas institucionales.

Usuario final programador

Alguien que no se considera un programador, pero que, sin embargo, escribe y depura software, como por ejemplo, un artista que crea macros complejas para una herramienta de dibujo.

Zona de Desarrollo Proximal

(ZPD, por sus siglas en inglés) Incluye el conjunto de problemas que las personas aún no pueden resolver por sí mismas pero que pueden resolver con la ayuda de un mentor más experimentado. Ver también falla productiva.

Reuniones, Reuniones, Reuniones

La mayoría de la gente es muy mala organizando reuniones: no llevan una agenda, no toman minutas, divagan o se desvían en irrelevancias, dicen algo trivial o repiten lo que otras personas ya dijeron sólo por decir algo, y mantienen conversaciones paralelas (lo cual garantiza que la reunión sea una pérdida de tiempo). Saber cómo organizar una reunión de manera eficiente es una habilidad central para cualquiera que desee terminar bien el trabajo; saber cómo participar en la reunión de otra persona es igual de importante (y aunque recibe mucha menos atención, como dijo una colega una vez: todo el mundo ofrece entrenamiento para líderes, pero nadie ofrece entrenamiento para seguidores).

Las reglas más importantes para hacer que las reuniones sean eficientes no son secretas, pero rara vez se siguen:

Decide si realmente se necesita una reunión.

Si el único propósito es compartir información, envía en su lugar un breve correo electrónico. Recuerda: puedes leer más rápido de lo que cualquier persona puede hablar; si alguien tiene datos para que el resto del equipo los asimile, la forma más educada de comunicarlos es por escrito.

Escribe una agenda.

Si a nadie le importa lo suficiente la reunión como para escribir una lista de puntos de lo que se discutirá, la reunión probablemente no se necesita.

Incluye horarios en la agenda.

Las agendas también pueden ayudarte a evitar que los primeros puntos le roben tiempo a los últimos, si incluyes en la agenda el tiempo que le dedicarás a cada punto. Tus primeras estimaciones con cualquier grupo nuevo serán tremendamente optimistas, así que revísalas para las siguientes reuniones. Sin embargo, no deberías planear una segunda o tercera reunión porque no alcanzó el tiempo: en cambio, trata de averiguar por qué ocuparon tiempo extra y arregla el problema que lo originó.

Prioriza.

Cada reunión es un microproyecto, por lo tanto el trabajo debería priorizarse de la misma manera que se hace para otros proyectos: aquello que tendrá alto impacto pero lleva poco tiempo debería realizarse primero, y aquello que tomará mucho tiempo pero tiene bajo impacto debería omitirse.

Haz a una persona responsable de mantener las cosas en movimiento.

Una persona debería tener la tarea de mantener los puntos a tiempo, llamar la atención de quien esté revisando el correo electrónico o teniendo conversaciones paralelas, pedir a quienes hablan demasiado que vayan al punto e invitar a quienes no intervienen a expresar su opinión. Esta persona no debería acaparar la conversación; de hecho, en una reunión bien organizada, quien está a cargo habla menos que el resto de los participantes.

Pide amabilidad.

Nadie puede ser grosero, nadie puede divagar y, si alguien se sale del tema, es tanto el derecho como la responsabilidad de quien modera decir: “Discutamos eso en otro lado”.

Sin interrupciones.

Los participantes deben levantar la mano o poner una nota adhesiva si quieren hablar después. Si la persona que está hablando no los nota, quien modera la reunión debería hacerlo.

Sin tecnología.

A menos que la tecnología sea necesaria por razones de accesibilidad, insiste amablemente en que todas las personas guarden sus teléfonos, tabletas y computadoras (por ejemplo: “Por favor, cierren sus aparatos electrónicos”).

Registro de minutas.

Alguien que no sea quien modera debería tomar nota de los fragmentos más importantes de información compartida, de todas las decisiones tomadas y de todas las tareas asignadas a alguien.

Toma notas.

Mientras otras personas están hablando, los participantes deberían tomar notas de las preguntas que quieran hacer o de las observaciones que quieran realizar (te sorprenderá lo inteligente que parecerás cuando llegue tu turno de hablar).

Termina temprano.

Si tu reunión está programada de 10:00 a 11:00, debes intentar terminar a las 10:50 para dar tiempo a la gente de pasar por el baño en su camino a donde vayan luego.

Tan pronto termina la reunión, envía a todos un correo electrónico con la minuta o publícala en la web:

La gente que no estuvo en la reunión puede mantenerse al tanto de lo que ocurrió.

Una página web o un mensaje de correo electrónico es una forma mucho más eficiente de ponerse al día que preguntarle a un compañero de equipo qué te perdiste.

Cualquiera puede comprobar lo que realmente se dijo o prometió.

Más de una vez he revisado la minuta de una reunión en la que estuve y pensé: “¿Yo dije eso?” o “¡Espera un minuto, yo no prometí tenerlo listo para entonces!”. Accidentalmente o no, muchas veces la gente recuerda las cosas de manera diferente; dejarlo por escrito da a los miembros del equipo la oportunidad de corregir errores, lo que puede ahorrar muchos malentendidos más tarde.

Las personas pueden rendir cuentas en reuniones posteriores.

No tiene sentido hacer listas de preguntas y puntos de acción si después no les das seguimiento. Si estás utilizando algún tipo de sistema de seguimiento de temas, crea un tema por cada pregunta o tarea justo después de la reunión y actualiza los que se hayan cumplido; luego comienza cada reunión repasando la lista de esos temas.

[Brow2007,Broo2016,Roge2018] tienen muchos consejos para organizar reuniones. Según mi experiencia, una hora de entrenamiento en cómo ser moderador es una de las mejores inversiones que harás.

Notas adhesivas y bingo de interrupción

Algunas personas están tan acostumbradas al sonido de su propia voz que insistirán en hablar la mitad del tiempo sin importar cuántas personas haya en la habitación. Para evitarlo, entrega a cada participante tres notas adhesivas al comienzo de la reunión. Cada vez que alguien hable tendrá que entregar una nota adhesiva; cuando se quede sin notas no podrá hablar hasta que todos hayan usado al menos una, momento en el que todos recuperan sus tres notas adhesivas. Esto asegura que nadie hable más del triple de veces que la persona más callada de la reunión, y cambia por completo la dinámica de la mayoría de los grupos: las personas que habían dejado de intentar ser escuchadas porque siempre las tapaban de repente tienen espacio para contribuir, y quienes hablaban con demasiada frecuencia se dan cuenta de lo injustos que han sido.

Otra técnica es un bingo de interrupción. Dibuja una tabla y etiqueta las filas y columnas con los nombres de los participantes. Agrega en la celda apropiada una marca para contar cada vez que alguien interrumpa a otro, y toma un momento para compartir los resultados a la mitad de la reunión. En la mayoría de los casos verás que una o dos personas son las que interrumpen siempre, a menudo sin ser conscientes de ello. Eso solo muchas veces es suficiente para detenerlas. Nota que esta técnica está destinada a manejar las interrupciones, no el tiempo de conversación: puede ser apropiado que las personas con más conocimiento de un tema hablen sobre él con más frecuencia en una reunión, pero nunca es apropiado cortar repetidamente a las personas.
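La mecánica del bingo de interrupción puede esbozarse en unas pocas líneas de código. El siguiente bosquejo en JavaScript (nombres de funciones y de participantes hipotéticos, solo a modo de ilustración) lleva la cuenta de quién interrumpe a quién y resume los totales a mitad de la reunión:

```javascript
// Bosquejo ilustrativo del bingo de interrupción:
// una tabla con filas y columnas etiquetadas con los nombres de los participantes.
function crearBingo(participantes) {
  const tabla = {};
  for (const quien of participantes) {
    tabla[quien] = {};
    for (const aQuien of participantes) {
      if (quien !== aQuien) tabla[quien][aQuien] = 0;
    }
  }
  return tabla;
}

// Agrega una marca en la celda apropiada cada vez que alguien interrumpe.
function registrarInterrupcion(tabla, quien, aQuien) {
  tabla[quien][aQuien] += 1;
}

// Resumen a mitad de la reunión: total de interrupciones por persona.
function resumen(tabla) {
  const totales = {};
  for (const quien in tabla) {
    totales[quien] = Object.values(tabla[quien]).reduce((a, b) => a + b, 0);
  }
  return totales;
}

const tabla = crearBingo(["Ana", "Beto", "Carla"]);
registrarInterrupcion(tabla, "Ana", "Beto");
registrarInterrupcion(tabla, "Ana", "Carla");
console.log(resumen(tabla)); // { Ana: 2, Beto: 0, Carla: 0 }
```

En la práctica basta con papel y lápiz; el código solo muestra que la técnica mide interrupciones, no tiempo total de conversación.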

Las reglas de Martha

Las organizaciones de todo el mundo realizan sus reuniones de acuerdo a las Reglas de Orden de Robert, pero estas son mucho más formales de lo que requieren los proyectos pequeños. Una alternativa más ligera, conocida como “las reglas de Martha”, puede funcionar mucho mejor para la toma de decisiones por consenso [Mina1986]:

  1. Antes de cada reunión cualquiera que lo desee puede patrocinar una propuesta compartiéndola con el grupo. Las propuestas deben ser archivadas al menos 24 horas antes de una reunión para ser consideradas en esa reunión, y deben incluir:

    • un resumen de una línea;

    • el texto completo de la propuesta;

    • cualquier información de antecedentes requerida;

    • pros y contras; y

    • posibles alternativas.

    Las propuestas deberían ser a lo sumo de 2 páginas.

  2. Se establece un quórum en una reunión si la mitad o más de los miembros votantes están presentes.

  3. Una vez que una persona patrocina una propuesta es responsable de ella. El grupo no puede discutir o votar sobre el tema a menos que quien patrocina o su delegado esté presente. La persona patrocinadora también es responsable de presentar el tema al grupo.

  4. Después que la persona patrocinadora presente la propuesta se emite un voto preliminar para la propuesta antes de cualquier discusión:

    • ¿A quién le gusta la propuesta?

    • ¿A quién le parece razonable la propuesta?

    • ¿Quién se siente incómodo con la propuesta?

    Los votos preliminares se pueden hacer con el pulgar hacia arriba, el pulgar hacia los lados o el pulgar hacia abajo (en persona) o escribiendo +1, 0 o -1 en el chat en línea (en reuniones virtuales).

  5. Si a todos o a la mayoría del grupo le gusta o resulta razonable la propuesta, se pasa inmediatamente a una votación formal sin más discusión.

  6. Si la mayoría del grupo está disconforme con la propuesta se pospone para que la persona patrocinadora pueda volver a trabajar sobre ella.

  7. Si algunos miembros se sienten disconformes pueden expresar brevemente sus objeciones. Luego se establece un temporizador para una breve discusión moderada por una persona facilitadora. Después de diez minutos o cuando nadie más tenga algo que agregar (lo que ocurra primero), quien facilita llama a una votación sí-o-no sobre la pregunta: “¿Deberíamos implementar esta decisión aun con las objeciones establecidas?” Si la mayoría vota “sí” la propuesta se implementa. De lo contrario, la propuesta se devuelve a la persona patrocinadora para trabajarla más.
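La regla de decisión de los pasos 4 a 7 puede resumirse en un pequeño bosquejo en JavaScript (el nombre de la función y la representación de los votos son supuestos míos, solo para ilustrar la lógica; no forman parte de las reglas originales):

```javascript
// Voto preliminar según las reglas de Martha:
// cada voto es +1 (le gusta), 0 (le parece razonable) o -1 (se siente incómodo/a).
function resultadoPreliminar(votos) {
  const incomodos = votos.filter((v) => v === -1).length;
  if (incomodos === 0) {
    // A todos les gusta o les parece razonable: votación formal sin más discusión.
    return "votación formal inmediata";
  }
  if (incomodos > votos.length / 2) {
    // La mayoría está disconforme: la propuesta se pospone.
    return "se devuelve a la persona patrocinadora";
  }
  // Solo algunos están disconformes: discusión breve con tiempo limitado.
  return "discusión moderada y luego votación sí-o-no";
}

console.log(resultadoPreliminar([1, 1, 0, 1]));    // votación formal inmediata
console.log(resultadoPreliminar([-1, -1, -1, 0])); // se devuelve a la persona patrocinadora
console.log(resultadoPreliminar([1, 0, -1, 1]));   // discusión moderada y luego votación sí-o-no
```

El valor del procedimiento está justamente en que el camino a seguir queda determinado por el voto preliminar, sin debate previo.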

Reuniones en línea

La discusión de Chelsea Troy sobre por qué las reuniones en línea son a menudo frustrantes e improductivas señala un punto importante: en la mayoría de las reuniones en línea, la primera persona en hablar durante una pausa toma la palabra. ¿El resultado? “Si tienes algo que quieres decir, tienes que dejar de escuchar a la persona que está hablando y, en cambio, enfocarte en cuándo va a detenerse o terminar, para poder saltar sobre ese nanosegundo de silencio y ser la primera en pronunciar algo. El formato alienta a los participantes que desean contribuir a hablar más y escuchar menos.”

La solución es tener un chat de texto en paralelo a la videoconferencia donde las personas puedan indicar que quieren hablar; quien modera selecciona entonces a las personas de esa lista de espera. Si la reunión es grande o conflictiva, mantén a todos en silencio y permite que solo quien modera pueda habilitar el micrófono de cada participante.

La autopsia

Cada proyecto debe terminar con una autopsia en la que los participantes reflexionen sobre lo que acaban de lograr y sobre qué podrían mejorar la próxima vez. Su objetivo no es señalar con el dedo de la vergüenza a las personas, aunque si eso tiene que suceder, la autopsia es el mejor lugar para hacerlo.

Una autopsia se realiza como cualquier otra reunión con algunas pautas adicionales [Derb2006]:

Conseguir una persona que modere que no sea parte del proyecto y no tenga interés en serlo.

Reservar una hora y solo una hora.

Según mi experiencia, nada útil se dice en los primeros diez minutos de la primera autopsia de alguien, dado que las personas son naturalmente un poco tímidas para alabar o condenar su propio trabajo. Igualmente, nada útil se dice después de la primera hora: si aún siguen hablando, probablemente sea porque una o dos personas tienen cosas de las que quieren desahogarse en lugar de dar sugerencias para mejorar.

Requerir asistencia.

Todos los que formaron parte del proyecto deben estar en la sala para la autopsia. Esto es más importante de lo que piensas: las personas que tienen más que aprender de la autopsia en general son menos propensas a presentarse si la reunión es opcional.

Confeccionar dos listas.

Cuando estoy moderando pongo los encabezados “Hazlo otra vez” y “Hazlo diferente” en la pizarra, luego pido a cada persona que me dé una respuesta para cada lista, en orden y sin repetir nada que ya se haya dicho.

Comentar sobre acciones en lugar de individuos.

Para cuando el proyecto esté terminado es posible que algunas personas ya no sean amigas. No dejes que esto desvíe la reunión: si alguien tiene una queja específica sobre otro miembro del equipo, pídele que critique un evento o una decisión en particular. “Tiene una mala actitud” no ayuda a nadie a mejorar.

Priorizar las recomendaciones.

Una vez que los pensamientos de todos estén al descubierto ordénalos según cuáles son los más importantes de mantener y cuáles son los más importantes para cambiar. Probablemente solo podrás abordar uno o dos de cada lista en tu próximo proyecto, pero si haces eso cada vez tu vida mejorará rápidamente.

Listas de verificación y plantillas

[Gawa2007] hizo popular la idea de que usar listas de verificación puede salvar vidas, y estudios más recientes apoyan su efectividad [Avel2013,Urba2014,Rams2019]. Las listas de verificación resultan especialmente útiles cuando hay docentes que recién se incorporan al equipo. Los ejemplos a continuación pueden servirte como material inicial a partir del cual desarrollar tus propias listas de verificación.

Enseñando a evaluar

Esta rúbrica fue diseñada para evaluar lo enseñado durante 5 a 10 minutos con diapositivas, programación en vivo o una combinación de ambas estrategias. Valora cada ítem como “sí”, “más o menos”, “no” o “no corresponde (N/A)”.

Inicio

Presente (usa N/A para otras respuestas)
Adecuada duración (10 a 30 segundos)
Se presenta
Presenta el tema que se trabajará
Describe los requisitos

Contenido

Objetivos claros/narrativa fluida
Lenguaje inclusivo
Ejemplos y tareas reales
Enseña buenas prácticas/utiliza el idioma del código
Señala un camino intermedio entre la Escila de la jerga y la Caribdis de la sobresimplificación

Dando la lección

Voz clara y entendible (usa “Más o menos” o “No” para acentos muy marcados)
Ritmo: ni muy rápido ni muy lento, no hace pausas largas o se interrumpe, no aparenta estar leyendo sus notas
Seguridad: no se pierde en el pozo de alquitrán de la incertidumbre ni tampoco en las colinas de estiércol de la condescendencia

Diapositivas

Usa diapositivas (utiliza N/A para otras respuestas)
Diapositivas y discurso se complementan uno al otro (codificación dual)
Fuentes y colores legibles/sin bloques de texto abrumadores por su tamaño
Pantalla cambia frecuentemente (algo cada 30 segundos)
Adecuado uso de figuras

Programación en vivo

Usa programación en vivo (valora N/A para otras respuestas)
Código y discurso se complementan uno al otro
Fuentes y colores legibles/adecuada cantidad de código en pantalla
Uso de herramientas de forma adecuada
Resalta elementos clave del código
Analiza los errores

Cierre

Presente (valora N/A para otras respuestas)
Adecuada duración (10 a 30 segundos)
Resume puntos clave
Presenta un esquema general de los próximos pasos

En general

Puntos claramente conectados/flujo lógico
Hace que el tema sea interesante (es decir, no aburrido)
Comprende el tema

Evaluación del grupo docente

Esta rúbrica fue diseñada para evaluar el desempeño de individuos dentro de un grupo. Los ejemplos a continuación pueden servirte como material inicial a partir del cual desarrollar tus propias rúbricas. Valora cada ítem como “sí”, “más o menos”, “no” o “no corresponde (N/A)”.

Comunicación

Escucha atentamente y sin interrumpir
Aclara lo que se ha dicho para asegurar la comprensión
Articula ideas en forma clara y concisa
Argumenta adecuadamente sus ideas
Obtiene el apoyo de otros miembros del equipo

Toma de decisiones

Analiza los problemas desde diferentes puntos de vista
Aplica lógica para resolver problemas
Propone soluciones basadas en hechos y no en “corazonadas” o intuición
Invita a los miembros del equipo a proponer nuevas ideas
Genera nuevas ideas
Acepta cambios

Colaboración

Reconoce los problemas que el equipo necesita enfrentar y resolver
Trabaja para hallar soluciones que sean aceptables para todas las partes involucradas
Comparte el crédito del éxito con otros miembros del equipo
Promueve la participación entre todos los miembros del equipo
Acepta la crítica abiertamente y sin “ponerse a la defensiva”
Coopera con el equipo

Autogestión

Monitorea sus avances para asegurar que se alcancen los objetivos
Le da máxima prioridad a obtener resultados
Define tareas prioritarias para los encuentros de trabajo
Promueve que otros miembros del equipo manifiesten sus opiniones, incluso si no coinciden con las propias
Mantiene la atención durante la reunión
Usa eficientemente el tiempo de reunión
Sugiere formas de trabajar en las reuniones

Organización de eventos

Las listas de verificación a continuación pueden usarse antes, durante y después de un evento.

Programar el evento

  • Decidir si será presencial, virtual para un lugar, o virtual para más de un lugar.

  • Conversar con la/el disertante sobre sus expectativas y asegurarse de que están de acuerdo en cuanto a quién cubrirá los costos de traslado.

  • Definir quiénes podrán participar: ¿será el evento abierto a todas las personas? ¿restringido a miembros de una organización? ¿una situación intermedia?

  • Organizar quiénes serán docentes.

  • Organizar el espacio, incluyendo *breakout rooms* si fuera necesario.

  • Definir la fecha. Si fuera presencial, reservar lo relativo al viaje.

  • Conseguir nombres y direcciones de e-mail de participantes a través de la/el disertante.

  • Asegurarse de que la totalidad de las y los participantes esté registrada.

Construcción del evento

  • Crea una página web con los detalles del taller, que incluya fecha, lugar, y lo que las y los participantes deben traer consigo.

  • Confirma las necesidades especiales de las/los participantes.

  • Si el evento será virtual, prueba el sistema de videoconferencia, dos veces.

  • Asegúrate de que las/los participantes tengan acceso a internet.

  • Crea un espacio para compartir apuntes y soluciones a los ejercicios (p.ej. un documento Google Doc).

  • Establece contacto con las/los asistentes por e-mail con un mensaje de bienvenida que contenga el link a la página del taller, lecturas sobre la temática, la descripción de la configuración que deban hacer, una lista de los elementos requeridos para el taller y un mecanismo para contactar a la/el disertante o docente durante el día.

Al comienzo del evento

  • Recuerda a las y los asistentes el código de conducta.

  • Toma lista y crea una lista de nombres para pegar en la página compartida para tomar notas.

  • Reparte *post-its*.

  • Asegúrate de que tengan acceso a internet.

  • Asegúrate de que puedan acceder a la página compartida.

  • Registra información relevante sobre la identificación de las/los asistentes en sus perfiles online.

Al finalizar el evento

  • Actualiza la lista de participantes.

  • Lleva un registro del *feedback* dado por las/los participantes.

  • Haz una copia de la página compartida.

Equipo de viaje

Aquí algunas cosas que las/los docentes llevan consigo a los talleres:

post-its y caramelos para suavizar la garganta
zapatos cómodos y pequeña libreta de notas
adaptador de corriente eléctrica de repuesto y camisa de repuesto
desodorante y adaptadores para video
pegatinas (*stickers*) para computadoras y tus notas (impresas o en una tableta)
barrita de cereal o similar y antiácido (problema de comer al paso)
tarjeta de presentación y anteojos/lentes de contacto de repuesto
libreta y bolígrafo, y puntero láser
vaso térmico para té/café y marcadores de pizarra adicionales
cepillo de dientes o enjuague bucal y toallitas húmedas descartables (puede volcarse algo encima de tu ropa)

Al viajar muchas/os docentes llevan además zapatos deportivos, traje de baño, mat de yoga o el material que necesiten para hacer actividad física. También una conexión WiFi portátil por si la de la habitación no funciona, y alguna memoria USB con los instaladores del software que las/los estudiantes aprenderán.

Diseño de lecciones

Esta sección resume el diseño de lecciones por el método hacia atrás (*backward design* en inglés), que fue desarrollado de forma independiente por [Wigg2005,Bigg2011,Fink2013]. Propone una progresión paso a paso para ayudarte a pensar qué hacer en cada etapa y en qué orden, y proporciona ejercicios breves espaciados para que puedas reorientar o redirigir tu esfuerzo sin demasiadas sorpresas desagradables.

Todo lo que produzcas del paso 2 en adelante se incluirá en tu lección final, por lo que no se trata de un desperdicio de esfuerzo: como se describió en el Capítulo 6, construir ejercicios de práctica desde el comienzo te ayuda a asegurarte de que todo lo que pidas a las/los estudiantes contribuirá a los objetivos de la lección y de que todo lo que necesitan saber está cubierto.

Los pasos se describen en orden creciente de detalle, pero el proceso en sí es siempre iterativo. Con frecuencia volverás a revisar tus respuestas de pasos anteriores a medida que resuelvas preguntas más avanzadas o te des cuenta de que tu primera idea sobre cómo resolver algo no iba a funcionar de la manera en que pensaste originalmente.

¿Para quién es esta lección?

Crea algunas/os estudiantes tipo (Sección 6.1) o (mejor aún) elige entre los que tú y tus colegas ya han creado para uso general. Cada estudiante tipo debe tener:

  1. un contexto general,

  2. lo que ya sabe,

  3. lo que cree que quiere saber y

  4. qué necesidades especiales tiene.

 
Ejercicio breve: resumen breve de a quién estás intentando ayudar.

¿Cuál es la idea principal?

Responde tres o cuatro de las preguntas a continuación, solo enumerando elementos, para ayudarte a definir el enfoque de la lección. No necesitas responder todas las preguntas y puedes plantear y responder otras si crees que ayudarán, pero sí o sí debes incluir un par de respuestas a la primera pregunta. Además, en esta etapa puedes crear un mapa conceptual (Sección 3.1).

  • ¿Qué problemas aprenderán a resolver?

  • ¿Cuáles conceptos y técnicas aprenderán?

  • ¿Cuáles herramientas tecnológicas, paquetes y funciones usarán?

  • ¿Qué términos de la jerga definirás?

  • ¿Qué analogías usarás para explicar conceptos?

  • ¿Qué errores o conceptos equivocados esperas encontrar?

  • ¿Cuáles grupos de datos utilizarás?

 
Ejercicio breve: enfoque general y sin detalles de la lección. Compártelo con una/un colega: una breve devolución en esta instancia puede ahorrar horas de esfuerzo más tarde.

¿Qué harán las/los estudiantes durante la lección?

Establece los objetivos del Paso 2 escribiendo descripciones detalladas de algunos ejercicios que las/los estudiantes serán capaces de resolver al final de la lección. Hacer esto es análogo al desarrollo guiado por pruebas (*test-driven development*): en vez de trabajar en función de un conjunto de objetivos de aprendizaje (probablemente ambiguos), trabaja “hacia atrás” y elabora ejemplos concretos de lo que quieres que tus estudiantes puedan resolver. Esto además deja en evidencia requisitos técnicos que de otro modo podrían no descubrirse hasta que fuera demasiado tarde.

Para complementar la descripción detallada de los ejercicios escribe la descripción de uno o dos ejercicios para cada hora de lección como una lista de conceptos breve para mostrar qué tan rápido esperas que las/los estudiantes avancen. De nuevo, esto permitirá tener una visión realista sobre lo que asumiste de las/los estudiantes y ayudará a hacer evidentes los requisitos técnicos necesarios para resolver el ejercicio. Una manera de elaborar estos ejercicios adicionales es hacer una lista con las habilidades que necesitan para resolver los ejercicios principales y crear un ejercicio que aborde cada una.

 
Ejercicio breve: 1–2 ejercicios explicados de principio a fin que usen las habilidades que las/los estudiantes van a aprender, y una media docena de ejercicios con su solución esquematizada. Incluye soluciones completas para que puedas asegurarte de que el programa que usen funciona.

¿Cómo están conectados los conceptos?

Coloca los ejercicios que creaste en un orden lógico y a partir de ellos deriva el esquema general de la lección. El esquema debe tener 3–4 ítems por hora de clase, con una evaluación formativa para cada uno. En esta etapa es común modificar las evaluaciones para que se basen en las anteriores.

 
Ejercicio breve: el esquema de una lección. Es muy probable que descubras que habías olvidado algunos elementos que no estaban incluidos en tu trabajo hasta aquí, así que no te sorprendas si debes ir y venir varias veces.

Descripción general de la lección

Ahora puedes escribir la descripción general de la lección que incluya:

  • un párrafo de descripción (es decir, un discurso de venta para tus estudiantes),

  • media docena de objetivos de aprendizaje y

  • un resumen de los requisitos.

Hacer esto antes suele ser un esfuerzo inútil ya que el material que compone la lección aumenta, se recorta o cambia de lugar en las etapas anteriores.

 
Ejercicio breve: descripción del curso, objetivos de aprendizaje y requisitos.

Cuestionario pre-evaluación

Este cuestionario ayuda a las/los docentes a estimar el conocimiento previo sobre programación de las/los participantes de un taller introductorio a JavaScript. Las preguntas y respuestas son concretas y el cuestionario es corto, para que no resulte intimidante.

  1. ¿Cuál de estas opciones describe mejor tu experiencia con la programación en general?

    • No tengo ninguna experiencia.

    • He escrito unas pocas líneas de código alguna vez.

    • He escrito programas para uso personal de un par de páginas de extensión.

    • He escrito y mantenido porciones grandes de programas.

  2. ¿Cuál de estas opciones describe mejor tu experiencia con la programación en JavaScript?

    • No tengo ninguna experiencia.

    • He escrito unas pocas líneas de código alguna vez.

    • He escrito programas para uso personal de un par de páginas de extensión.

    • He escrito y mantenido porciones grandes de programas.

  3. ¿Cuál de estas opciones describe mejor cuán fácil te resultaría escribir un programa en el lenguaje de programación que prefieras para hallar el número más alto en una lista?

    • No sabría por dónde comenzar.

    • Podría resolverlo con prueba y error y realizando bastantes búsquedas en internet.

    • Lo resolvería rápido con poco o nada de ayuda externa.

  4. ¿Cuál de estas opciones describe mejor cuán fácil te resultaría escribir un programa en JavaScript para hallar y cambiar a mayúscula todos los títulos de una página web?

    • No sabría por dónde comenzar.

    • Podría resolverlo con prueba y error y realizando bastantes búsquedas en internet.

    • Lo resolvería rápido con poco o nada de ayuda externa.

  5. ¿Qué te gustaría saber o poder hacer al finalizar esta clase que no sabes o puedes hacer ahora?
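Como referencia de la dificultad que suponen estas preguntas, la tarea de la pregunta 3 (hallar el número más alto en una lista) podría resolverse en JavaScript más o menos así; es solo un bosquejo ilustrativo con un nombre de función inventado, no la única solución posible:

```javascript
// Pregunta 3: hallar el número más alto en una lista.
function maximo(numeros) {
  let mayor = numeros[0];          // supone una lista no vacía
  for (const n of numeros) {
    if (n > mayor) mayor = n;      // conserva el mayor valor visto hasta ahora
  }
  return mayor;
}

console.log(maximo([3, 7, 2, 9, 4])); // 9
```

Quien pueda escribir algo así “con poco o nada de ayuda externa” elegiría la tercera opción de esa pregunta.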

Ejemplos de mapas conceptuales

Estos mapas conceptuales fueron creados por Amy Hodge de la Universidad de Stanford y se reutilizan con permiso.

Mapa conceptual desde el punto de vista de los/las socios/as de la biblioteca
Mapa conceptual desde el punto de vista de la dirección de la biblioteca
Mapa conceptual desde el punto de vista de los/las amigos/as de la biblioteca

Solución del ejercicio de particionar

Mira el último ejercicio del Capítulo 3 para la representación completa de estos símbolos.

Representación particionada

References

[Abba2012] Abbate, Janet. Recoding Gender: Women’s Changing Participation in Computing. MIT Press, 2012. Describes the careers and accomplishments of the women who shaped the early history of computing, but have all too often been written out of that history.

[Abel2009] Abela, Andrew. “Chart Suggestions - a Thought Starter.” http://extremepresentation.typepad.com/files/choosing-a-good-chart-09.pdf, 2009. A graphical decision tree for choosing the right type of chart.

[Adam1975] Adams, Frank, and Myles Horton. Unearthing Seeds of Fire: The Idea of Highlander. Blair, 1975. A history of the Highlander Folk School and its founder, Myles Horton.

[Aike1975] Aiken, Edwin G., Gary S. Thomas, and William A. Shennum. “Memory for a Lecture: Effects of Notes, Lecture Rate, and Informational Density.” Journal of Educational Psychology 67, no. 3 (1975): 439–44. doi:10.1037/h0076613. An early landmark study showing that taking notes improved retention.

[Aiva2016] Aivaloglou, Efthimia, and Felienne Hermans. “How Kids Code and How We Know.” In 2016 International Computing Education Research Conference (ICER’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2960310.2960325. Presents an analysis of 250,000 Scratch projects.

[Dahl2018] Albright, Sarah Dahlby, Titus H. Klinge, and Samuel A. Rebelsky. “A Functional Approach to Data Science in CS1.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159550. Describes the design of a CS1 class built around data science.

[Alin1989] Alinsky, Saul D. Rules for Radicals: A Practical Primer for Realistic Radicals. Vintage, 1989. A widely-read guide to community organization written by one of the 20th Century’s great organizers.

[Alqa2017] Alqadi, Basma S., and Jonathan I. Maletic. “An Empirical Study of Debugging Patterns Among Novice Programmers.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017761. Reports patterns in the debugging activities and success rates of novice programmers.

[Alvi1999] Alvidrez, Jennifer, and Rhona S. Weinstein. “Early Teacher Perceptions and Later Student Academic Achievement.” Journal of Educational Psychology 91, no. 4 (1999): 731–46. doi:10.1037/0022-0663.91.4.731. An influential study of the effects of teachers’ perceptions of students on their later achievements.

[Ambr2010] Ambrose, Susan A., Michael W. Bridges, Michele DiPietro, Marsha C. Lovett, and Marie K. Norman. How Learning Works: Seven Research-Based Principles for Smart Teaching. Jossey-Bass, 2010. Summarizes what we know about education and why we believe it’s true, from cognitive psychology to social factors.

[Ande2001] Anderson, Lorin W., and David R. Krathwohl, eds. A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives. Longman, 2001. A widely-used revision to Bloom’s Taxonomy.

[Armo2008] Armoni, Michal, and David Ginat. “Reversing: A Fundamental Idea in Computer Science.” Computer Science Education 18, no. 3 (September 2008): 213–30. doi:10.1080/08993400802332670. Argues that the notion of reversing things is an unrecognized fundamental concept in computing education.

[Atki2000] Atkinson, Robert K., Sharon J. Derry, Alexander Renkl, and Donald Wortham. “Learning from Examples: Instructional Principles from the Worked Examples Research.” Review of Educational Research 70, no. 2 (June 2000): 181–214. doi:10.3102/00346543070002181. A comprehensive survey of worked examples research at the time.

[Auro2019] Aurora, Valerie, and Mary Gardiner. How to Respond to Code of Conduct Reports. Version 1.1. Frame Shift Consulting LLC, 2019. A short, practical guide to enforcing a Code of Conduct.

[Avel2013] Aveling, Emma-Louise, Peter McCulloch, and Mary Dixon-Woods. “A Qualitative Study Comparing Experiences of the Surgical Safety Checklist in Hospitals in High-Income and Low-Income Countries.” BMJ Open 3, no. 8 (August 2013). doi:10.1136/bmjopen-2013-003039. Reports the effectiveness of surgical checklist implementations in the UK and Africa.

[Bacc2013] Bacchelli, Alberto, and Christian Bird. “Expectations, Outcomes, and Challenges of Modern Code Review.” In 2013 International Conference on Software Engineering (ICSE’13), 2013. A summary of work on code review.

[Bari2017] Barik, Titus, Justin Smith, Kevin Lubick, Elisabeth Holmes, Jing Feng, Emerson Murphy-Hill, and Chris Parnin. “Do Developers Read Compiler Error Messages?” In 2017 International Conference on Software Engineering (ICSE’17). Institute of Electrical; Electronics Engineers (IEEE), 2017. doi:10.1109/icse.2017.59. Reports that developers do read error messages and doing so is as hard as reading source code: it takes 13-25% of total task time.

[Bark2015] Barker, Lecia, Christopher Lynnly Hovey, and Jane Gruning. “What Influences CS Faculty to Adopt Teaching Practices?” In 2015 Technical Symposium on Computer Science Education (SIGCSE’15). Association for Computing Machinery (ACM), 2015. doi:10.1145/2676723.2677282. Describes how computer science educators adopt new teaching practices.

[Bark2014] Barker, Lecia, Christopher Lynnly Hovey, and Leisa D. Thompson. “Results of a Large-Scale, Multi-Institutional Study of Undergraduate Retention in Computing.” In 2014 Frontiers in Education Conference (FIE’14). Institute of Electrical; Electronics Engineers (IEEE), 2014. doi:10.1109/fie.2014.7044267. Reports that meaningful assignments, faculty interaction with students, student collaboration on assignments, and (for male students) pace and workload relative to expectations drive retention in computing classes, while interactions with teaching assistants or with peers in extracurricular activities have little impact.

[Basi1987] Basili, Victor R., and Richard W. Selby. “Comparing the Effectiveness of Software Testing Strategies.” IEEE Transactions on Software Engineering SE-13, no. 12 (December 1987): 1278–96. doi:10.1109/tse.1987.232881. An early and influential summary of the effectiveness of code review.

[Basu2015] Basu, Soumya, Albert Wu, Brian Hou, and John DeNero. “Problems Before Solutions: Automated Problem Clarification at Scale.” In 2015 Conference on Learning @ Scale (L@S’15). Association for Computing Machinery (ACM), 2015. doi:10.1145/2724660.2724679. Describes a system in which students have to unlock test cases for their code by answering MCQs, and presents data showing that this is effective.

[Batt2018] Battestilli, Lina, Apeksha Awasthi, and Yingjun Cao. “Two-Stage Programming Projects: Individual Work Followed by Peer Collaboration.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159486. Reports that learning outcomes were improved by two-stage projects in which students work individually, then re-work the same problem in pairs.

[Baue2015] Bauer, Mark S., Laura Damschroder, Hildi Hagedorn, Jeffrey Smith, and Amy M. Kilbourne. “An Introduction to Implementation Science for the Non-Specialist.” BMC Psychology 3, no. 1 (September 2015). doi:10.1186/s40359-015-0089-9. Explains what implementation science is, using examples from the US Veterans Administration to illustrate.

[Beck2013] Beck, Leland, and Alexander Chizhik. “Cooperative Learning Instructional Methods for CS1: Design, Implementation, and Evaluation.” ACM Transactions on Computing Education 13, no. 3 (August 2013): 10:1–10:21. doi:10.1145/2492686. Reports that cooperative learning enhances learning outcomes and self-efficacy in CS1.

[Beck2014] Beck, Victoria. “Testing a Model to Predict Online Cheating—Much Ado About Nothing.” Active Learning in Higher Education 15, no. 1 (January 2014): 65–75. doi:10.1177/1469787413514646. Reports that cheating is no more likely in online courses than in face-to-face courses.

[Beck2016] Becker, Brett A., Graham Glanville, Ricardo Iwashima, Claire McDonnell, Kyle Goslin, and Catherine Mooney. “Effective Compiler Error Message Enhancement for Novice Programming Students.” Computer Science Education 26, nos. 2-3 (July 2016): 148–75. doi:10.1080/08993408.2016.1225464. Reports that improved error messages helped novices learn faster.

[Beni2017] Beniamini, Gal, Sarah Gingichashvili, Alon Klein Orbach, and Dror G. Feitelson. “Meaningful Identifier Names: The Case of Single-Letter Variables.” In 2017 International Conference on Program Comprehension (ICPC’17). Institute of Electrical; Electronics Engineers (IEEE), 2017. doi:10.1109/icpc.2017.18. Reports that use of single-letter variable names doesn’t affect ability to modify code, and that some single-letter variable names have implicit types and meanings.

[Benn2007a] Bennedsen, Jens, and Michael E. Caspersen. “Failure Rates in Introductory Programming.” ACM SIGCSE Bulletin 39, no. 2 (June 2007): 32. doi:10.1145/1272848.1272879. Reports that 67% of students pass CS1, with variation from 5% to 100%.

[Benn2007b] Bennedsen, Jens, and Carsten Schulte. “What Does ‘Objects-First’ Mean?: An International Study of Teachers’ Perceptions of Objects-First.” In 2007 Koli Calling Conference on Computing Education Research (Koli’07), 21–29, 2007. Teases out three meanings of “objects first” in computing education.

[Benn2000] Benner, Patricia. From Novice to Expert: Excellence and Power in Clinical Nursing Practice. Pearson, 2000. A classic study of clinical judgment and the development of expertise.

[Berg2012] Bergin, Joseph, Jane Chandler, Jutta Eckstein, Helen Sharp, Mary Lynn Manns, Klaus Marquardt, Marianna Sipos, Markus Völter, and Eugene Wallingford. Pedagogical Patterns: Advice for Educators. CreateSpace, 2012. A catalog of design patterns for teaching.

[Biel1995] Bielaczyc, Katerine, Peter L. Pirolli, and Ann L. Brown. “Training in Self-Explanation and Self-Regulation Strategies: Investigating the Effects of Knowledge Acquisition Activities on Problem Solving.” Cognition and Instruction 13, no. 2 (June 1995): 221–52. doi:10.1207/s1532690xci1302_3. Reports that training learners in self-explanation accelerates their learning.

[Bigg2011] Biggs, John, and Catherine Tang. Teaching for Quality Learning at University. Open University Press, 2011. A step-by-step guide to lesson development, delivery, and evaluation for people working in higher education.

[Bink2012] Binkley, Dave, Marcia Davis, Dawn Lawrie, Jonathan I. Maletic, Christopher Morrell, and Bonita Sharif. “The Impact of Identifier Style on Effort and Comprehension.” Empirical Software Engineering 18, no. 2 (May 2012): 219–76. doi:10.1007/s10664-012-9201-4. Reports that reading and understanding code is fundamentally different from reading prose, and that experienced developers are relatively unaffected by identifier style, but beginners benefit from the use of camel case (versus pothole case).

[Blik2014] Blikstein, Paulo, Marcelo Worsley, Chris Piech, Mehran Sahami, Steven Cooper, and Daphne Koller. “Programming Pluralism: Using Learning Analytics to Detect Patterns in the Learning of Computer Programming.” Journal of the Learning Sciences 23, no. 4 (October 2014): 561–99. doi:10.1080/10508406.2014.954750. Reports an attempt to categorize novice programmer behavior using machine learning that found interesting patterns on individual assignments.

[Bloo1984] Bloom, Benjamin S. “The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring.” Educational Researcher 13, no. 6 (June 1984): 4–16. doi:10.3102/0013189x013006004. Reports that students tutored one-to-one using mastery learning techniques perform two standard deviations better than those who learned through conventional lecture.

[Boha2011] Bohay, Mark, Daniel P. Blakely, Andrea K. Tamplin, and Gabriel A. Radvansky. “Note Taking, Review, Memory, and Comprehension.” American Journal of Psychology 124, no. 1 (2011): 63. doi:10.5406/amerjpsyc.124.1.0063. Reports that note-taking improves retention most at deeper levels of understanding.

[Boll2014] Bollier, David. Think Like a Commoner: A Short Introduction to the Life of the Commons. New Society Publishers, 2014. A short introduction to a widely-used model of governance.

[Borr2014] Borrego, Maura, and Charles Henderson. “Increasing the Use of Evidence-Based Teaching in STEM Higher Education: A Comparison of Eight Change Strategies.” Journal of Engineering Education 103, no. 2 (April 2014): 220–52. doi:10.1002/jee.20040. Categorizes different approaches to effecting change in higher education.

[DuBo1986] du Boulay, Benedict. “Some Difficulties of Learning to Program.” Journal of Educational Computing Research 2, no. 1 (February 1986): 57–73. doi:10.2190/3lfx-9rrf-67t8-uvk9. Introduces the idea of a notional machine.

[Bria2015] Brian, Samuel A., Richard N. Thomas, James M. Hogan, and Colin Fidge. “Planting Bugs: A System for Testing Students’ Unit Tests.” In 2015 Conference on Innovation and Technology in Computer Science Education (ITiCSE’15). Association for Computing Machinery (ACM), 2015. doi:10.1145/2729094.2742631. Describes a tool for assessing students’ programs and unit tests and finds that students often write weak tests and misunderstand the role of unit testing.

[Broo2016] Brookfield, Stephen D., and Stephen Preskill. The Discussion Book: 50 Great Ways to Get People Talking. Jossey-Bass, 2016. Describes fifty different ways to get groups talking productively.

[Brop1983] Brophy, Jere E. “Research on the Self-Fulfilling Prophecy and Teacher Expectations.” Journal of Educational Psychology 75, no. 5 (1983): 631–61. doi:10.1037/0022-0663.75.5.631. An early, influential study of the effects of teachers’ perceptions on students’ achievements.

[Brow2007] Brown, Michael Jacoby. Building Powerful Community Organizations: A Personal Guide to Creating Groups That Can Solve Problems and Change the World. Long Haul Press, 2007. A practical guide to creating effective organizations in and for communities.

[Brow2017] Brown, Neil C. C., and Amjad Altadmri. “Novice Java Programming Mistakes.” ACM Transactions on Computing Education 17, no. 2 (May 2017). doi:10.1145/2994154. Summarizes the authors’ analysis of novice programming mistakes.

[Brow2018] Brown, Neil C. C., and Greg Wilson. “Ten Quick Tips for Teaching Programming.” PLoS Computational Biology 14, no. 4 (April 2018). doi:10.1371/journal.pcbi.1006023. A short summary of what we actually know about teaching programming and why we believe it’s true.

[DeBr2015] De Bruyckere, Pedro, Paul A. Kirschner, and Casper D. Hulshof. Urban Myths About Learning and Education. Academic Press, 2015. Describes and debunks some widely-held myths about how people learn.

[Buff2015] Buffardi, Kevin, and Stephen H. Edwards. “Reconsidering Automated Feedback: A Test-Driven Approach.” In 2015 Technical Symposium on Computer Science Education (SIGCSE’15). Association for Computing Machinery (ACM), 2015. doi:10.1145/2676723.2677313. Describes a system that associates failed tests with particular features in a learner’s code so that learners cannot game the system.

[Burg2015] Burgstahler, Sheryl E. Universal Design in Higher Education: From Principles to Practice. Second. Harvard Education Press, 2015. Describes how to make online teaching materials accessible to everyone.

[Burk2018] Burke, Quinn, Cinamon Bailey, Louise Ann Lyon, and Emily Green. “Understanding the Software Development Industry’s Perspective on Coding Boot Camps Versus Traditional 4-Year Colleges.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159485. Compares the skills and credentials that tech industry recruiters are looking for to those provided by 4-year degrees and bootcamps.

[Butl2017] Butler, Zack, Ivona Bezakova, and Kimberly Fluet. “Pencil Puzzles for Introductory Computer Science.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017765. Describes pencil-and-paper puzzles that can be turned into CS1/CS2 assignments, and reports that they are enjoyed by students and encourage meta-cognition.

[Byck2005] Byckling, Pauli, Petri Gerdt, and Jorma Sajaniemi. “Roles of Variables in Object-Oriented Programming.” In 2005 Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA’05). Association for Computing Machinery (ACM), 2005. doi:10.1145/1094855.1094972. Presents single-variable design patterns common in novice programs.

[Camp2016] Campbell, Jennifer, Diane Horton, and Michelle Craig. “Factors for Success in Online CS1.” In 2016 Conference on Innovation and Technology in Computer Science Education (ITiCSE’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2899415.2899457. Compares students who opted into an online CS1 class with those who took it in person in a flipped classroom.

[Carr2014] Carroll, John. “Creating Minimalist Instruction.” International Journal of Designs for Learning 5, no. 2 (November 2014). doi:10.14434/ijdl.v5i2.12887. A look back on the author’s work on minimalist instruction.

[Carr1987] Carroll, John, Penny Smith-Kerker, James Ford, and Sandra Mazur-Rimetz. “The Minimal Manual.” Human-Computer Interaction 3, no. 2 (June 1987): 123–53. doi:10.1207/s15327051hci0302_2. The foundational paper on minimalist instruction.

[Cart2017] Carter, Adam Scott, and Christopher David Hundhausen. “Using Programming Process Data to Detect Differences in Students’ Patterns of Programming.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017785. Shows that students of different levels approach programming tasks differently, and that these differences can be detected automatically.

[Casp2007] Caspersen, Michael E., and Jens Bennedsen. “Instructional Design of a Programming Course.” In 2007 International Computing Education Research Conference (ICER’07). Association for Computing Machinery (ACM), 2007. doi:10.1145/1288580.1288595. Goes from a model of human cognition to three learning theories, and from there to the design of an introductory object-oriented programming course.

[Cele2018] Celepkolu, Mehmet, and Kristy Elizabeth Boyer. “Thematic Analysis of Students’ Reflections on Pair Programming in CS1.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159516. Reports that pair programming has the same learning gains as side-by-side programming but higher student satisfaction.

[Ceti2016] Cetin, Ibrahim, and Christine Andrews-Larson. “Learning Sorting Algorithms Through Visualization Construction.” Computer Science Education 26, no. 1 (January 2016): 27–43. doi:10.1080/08993408.2016.1160664. Reports that people learn more from constructing algorithm visualizations than they do from viewing visualizations constructed by others.

[Chen2018] Chen, Chen, Paulina Haduong, Karen Brennan, Gerhard Sonnert, and Philip Sadler. “The Effects of First Programming Language on College Students’ Computing Attitude and Achievement: A Comparison of Graphical and Textual Languages.” Computer Science Education 29, no. 1 (November 2018): 23–48. doi:10.1080/08993408.2018.1547564. Finds that students whose first language was graphical had higher grades than students whose first language was textual when the languages were introduced in or before early adolescent years.

[Chen2009] Chen, Nicholas, and Maurice Rabb. “A Pattern Language for Screencasting.” In 2009 Conference on Pattern Languages of Programs (PLoP’09). Association for Computing Machinery (ACM), 2009. doi:10.1145/1943226.1943234. A brief, well-organized collection of tips for making screencasts.

[Chen2017] Cheng, Nick, and Brian Harrington. “The Code Mangler: Evaluating Coding Ability Without Writing Any Code.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017704. Reports that student performance on exercises in which they undo code mangling correlates strongly with performance on traditional assessments.

[Cher2007] Cherubini, Mauro, Gina Venolia, Rob DeLine, and Amy J. Ko. “Let’s Go to the Whiteboard: How and Why Software Developers Use Drawings.” In 2007 Conference on Human Factors in Computing Systems (CHI’07). Association for Computing Machinery (ACM), 2007. doi:10.1145/1240624.1240714. Reports that developers draw diagrams to aid discussion rather than to document designs.

[Cher2009] Cheryan, Sapna, Victoria C. Plaut, Paul G. Davies, and Claude M. Steele. “Ambient Belonging: How Stereotypical Cues Impact Gender Participation in Computer Science.” Journal of Personality and Social Psychology 97, no. 6 (2009): 1045–60. doi:10.1037/a0016239. Reports that subtle environmental cues have a measurable impact on the interest that people of different genders have in computing.

[Chet2014] Chetty, Raj, John N. Friedman, and Jonah E. Rockoff. “Measuring the Impacts of Teachers II: Teacher Value-Added and Student Outcomes in Adulthood.” American Economic Review 104, no. 9 (September 2014): 2633–79. doi:10.1257/aer.104.9.2633. Reports that good teachers have a small but measurable impact on student outcomes.

[Chi1989] Chi, Michelene T. H., Miriam Bassok, Matthew W. Lewis, Peter Reimann, and Robert Glaser. “Self-Explanations: How Students Study and Use Examples in Learning to Solve Problems.” Cognitive Science 13, no. 2 (April 1989): 145–82. doi:10.1207/s15516709cog1302_1. A seminal paper on the power of self-explanation.

[Coll1991] Collins, Allan, John Seely Brown, and Ann Holum. “Cognitive Apprenticeship: Making Thinking Visible.” American Educator 6 (1991): 38–46. Describes an educational model based on the notion of apprenticeship and master guidance.

[Coco2018] Center for Community Organizations. “The ‘Problem’ Woman of Colour in the Workplace.” https://coco-net.org/problem-woman-colour-nonprofit-organizations/, 2018. Outlines the experience of many women of color in the workplace.

[Coom2012] Coombs, Norman. Making Online Teaching Accessible. Jossey-Bass, 2012. An accessible guide to making online lessons accessible.

[Covi2017] Covington, Martin V., Linda M. von Hoene, and Dominic J. Voge. Life Beyond Grades: Designing College Courses to Promote Intrinsic Motivation. Cambridge University Press, 2017. Explores ways of balancing intrinsic and extrinsic motivation in institutional education.

[Craw2010] Crawford, Matthew B. Shop Class as Soulcraft: An Inquiry into the Value of Work. Penguin, 2010. A deep analysis of what we learn about ourselves by doing certain kinds of work.

[Crou2001] Crouch, Catherine H., and Eric Mazur. “Peer Instruction: Ten Years of Experience and Results.” American Journal of Physics 69, no. 9 (September 2001): 970–77. doi:10.1119/1.1374249. Reports results from the first ten years of peer instruction in undergraduate physics classes, and describes ways in which its implementation changed during that time.

[Csik2008] Csikszentmihalyi, Mihaly. Flow: The Psychology of Optimal Experience. Harper, 2008. An influential discussion of what it means to be fully immersed in a task.

[Cunn2017] Cunningham, Kathryn, Sarah Blanchard, Barbara J. Ericson, and Mark Guzdial. “Using Tracing and Sketching to Solve Programming Problems.” In 2017 Conference on International Computing Education Research (ICER’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3105726.3106190. Found that writing new values near variables’ names as they change is the most effective tracing technique.

[Cutt2017] Cutts, Quintin, Charles Riedesel, Elizabeth Patitsas, Elizabeth Cole, Peter Donaldson, Bedour Alshaigy, Mirela Gutica, Arto Hellas, Edurne Larraza-Mendiluze, and Robert McCartney. “Early Developmental Activities and Computing Proficiency.” In 2017 Conference on Innovation and Technology in Computer Science Education (ITiCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3174781.3174789. Surveyed adult computer users about childhood activities and found strong correlation between confidence and computer use based on reading on one’s own and playing with construction toys with no moving parts (like Lego).

[Dage2010] Dagenais, Barthélémy, Harold Ossher, Rachel K. E. Bellamy, Martin P. Robillard, and Jacqueline P. de Vries. “Moving into a New Software Project Landscape.” In 2010 International Conference on Software Engineering (ICSE’10). ACM Press, 2010. doi:10.1145/1806799.1806842. A look at how people move from one project or domain to another.

[Deb2018] Deb, Debzani, Muztaba Fuad, James Etim, and Clay Gloster. “MRS: Automated Assessment of Interactive Classroom Exercises.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159607. Reports that doing in-class exercises with realtime feedback using mobile devices improved concept retention and student engagement while reducing failure rates.

[Denn2019] Denny, Paul, Brett A. Becker, Michelle Craig, Greg Wilson, and Piotr Banaszkiewicz. “Research This! Questions That Computing Educators Most Want Computing Education Researchers to Answer.” In 2019 Conference on International Computing Education Research (ICER’19). Association for Computing Machinery (ACM), 2019. Found little overlap between the questions that computing education researchers are most interested in and the questions practitioners want answered.

[Derb2006] Derby, Esther, and Diana Larsen. Agile Retrospectives: Making Good Teams Great. Pragmatic Bookshelf, 2006. Describes how to run a good project retrospective.

[Deve2018] Devenyi, Gabriel A., Rémi Emonet, Rayna M. Harris, Kate L. Hertweck, Damien Irving, Ian Milligan, and Greg Wilson. “Ten Simple Rules for Collaborative Lesson Development.” PLoS Computational Biology 14, no. 3 (March 2018). doi:10.1371/journal.pcbi.1005963. Describes how to develop lessons together.

[Dida2016] Didau, David, and Nick Rose. What Every Teacher Needs to Know About Psychology. John Catt Educational, 2016. An informative, opinionated explanation of what modern psychology has to say about teaching.

[DiSa2014a] DiSalvo, Betsy, Mark Guzdial, Amy Bruckman, and Tom McKlin. “Saving Face While Geeking Out: Video Game Testing as a Justification for Learning Computer Science.” Journal of the Learning Sciences 23, no. 3 (July 2014): 272–315. doi:10.1080/10508406.2014.893434. Found that 65% of male African-American participants in a game testing program went on to study computing.

[DiSa2014b] DiSalvo, Betsy, Cecili Reid, and Parisa Khanipour Roshan. “They Can’t Find Us.” In 2014 Technical Symposium on Computer Science Education (SIGCSE’14). Association for Computing Machinery (ACM), 2014. doi:10.1145/2538862.2538933. Reports that the search terms parents were likely to use for out-of-school CS classes didn’t actually find those classes.

[Douc2005] Douce, Christopher, David Livingstone, and James Orwell. “Automatic Test-Based Assessment of Programming.” Journal on Educational Resources in Computing 5, no. 3 (September 2005). doi:10.1145/1163405.1163409. Reviews the state of auto-graders at the time.

[Edwa2014b] Edwards, Stephen H., and Zalia Shams. “Do Student Programmers All Tend to Write the Same Software Tests?” In 2014 Conference on Innovation and Technology in Computer Science Education (ITiCSE’14). Association for Computing Machinery (ACM), 2014. doi:10.1145/2591708.2591757. Reports that students wrote tests for the happy path rather than to detect hidden bugs.

[Edwa2014a] Edwards, Stephen H., Zalia Shams, and Craig Estep. “Adaptively Identifying Non-Terminating Code When Testing Student Programs.” In 2014 Technical Symposium on Computer Science Education (SIGCSE’14). Association for Computing Machinery (ACM), 2014. doi:10.1145/2538862.2538926. Describes an adaptive scheme for detecting non-terminating student coding submissions.

[Endr2014] Endrikat, Stefan, Stefan Hanenberg, Romain Robbes, and Andreas Stefik. “How Do API Documentation and Static Typing Affect API Usability?” In 2014 International Conference on Software Engineering (ICSE’14). ACM Press, 2014. doi:10.1145/2568225.2568299. Shows that types do add complexity to programs, but that this pays off fairly quickly because types act as documentation for a method’s use.

[Ensm2003] Ensmenger, Nathan L. “Letting the ‘Computer Boys’ Take over: Technology and the Politics of Organizational Transformation.” International Review of Social History 48, no. S11 (December 2003): 153–80. doi:10.1017/s0020859003001305. Describes how programming was turned from a female into a male profession in the 1960s.

[Ensm2012] ———. The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise. MIT Press, 2012. Traces the emergence and rise of computer experts in the 20th Century, and particularly the way that computing became male-gendered.

[Eppl2006] Eppler, Martin J. “A Comparison Between Concept Maps, Mind Maps, Conceptual Diagrams, and Visual Metaphors as Complementary Tools for Knowledge Construction and Sharing.” Information Visualization 5, no. 3 (June 2006): 202–10. doi:10.1057/palgrave.ivs.9500131. Compares concept maps, mind maps, conceptual diagrams, and visual metaphors as learning tools.

[Epst2002] Epstein, Lewis Carroll. Thinking Physics: Understandable Practical Reality. Insight Press, 2002. An entertaining problem-based introduction to thinking like a physicist.

[Eric2017] Ericson, Barbara J., Lauren E. Margulieux, and Jochen Rick. “Solving Parsons Problems Versus Fixing and Writing Code.” In 2017 Koli Calling Conference on Computing Education Research (Koli’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3141880.3141895. Reports that solving 2D Parsons problems with distractors takes less time than writing or fixing code but has equivalent learning outcomes.

[Eric2015] Ericson, Barbara J., Steven Moore, Briana B. Morrison, and Mark Guzdial. “Usability and Usage of Interactive Features in an Online Ebook for CS Teachers.” In 2015 Workshop in Primary and Secondary Computing Education (Wipsce’15), 111–20. Association for Computing Machinery (ACM), 2015. doi:10.1145/2818314.2818335. Reports that learners are more likely to attempt Parsons Problems than nearby multiple choice questions in an ebook.

[Eric2016] Ericsson, K. Anders. “Summing up Hours of Any Type of Practice Versus Identifying Optimal Practice Activities.” Perspectives on Psychological Science 11, no. 3 (May 2016): 351–54. doi:10.1177/1745691616635600. A critique of a meta-study of deliberate practice based on the latter’s overly-broad inclusion of activities.

[Farm2006] Farmer, Eugene. “The Gatekeeper’s Guide, or How to Kill a Tool.” IEEE Software 23, no. 6 (November 2006): 12–13. doi:10.1109/ms.2006.174. Ten tongue-in-cheek rules for making sure that a new software tool doesn’t get adopted.

[Fehi2008] Fehily, Chris. SQL: Visual Quickstart Guide. Third. Peachpit Press, 2008. An introduction to SQL that is both a good tutorial and a good reference guide.

[Finc2012] Fincher, Sally, Brad Richards, Janet Finlay, Helen Sharp, and Isobel Falconer. “Stories of Change: How Educators Change Their Practice.” In 2012 Frontiers in Education Conference (FIE’12). Institute of Electrical and Electronics Engineers (IEEE), 2012. doi:10.1109/fie.2012.6462317. A detailed look at how educators actually adopt new teaching practices.

[Finc2019] Fincher, Sally, and Anthony Robins, eds. The Cambridge Handbook of Computing Education Research. Cambridge University Press, 2019. A 900-page summary of what we know about computing education.

[Finc2007] Fincher, Sally, and Josh Tenenberg. “Warren’s Question.” In 2007 International Computing Education Research Conference (ICER’07). Association for Computing Machinery (ACM), 2007. doi:10.1145/1288580.1288588. A detailed look at a particular instance of transferring a teaching practice.

[Fink2013] Fink, L. Dee. Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses. Jossey-Bass, 2013. A step-by-step guide to a systematic lesson design process.

[Fisc2015] Fischer, Lars, and Stefan Hanenberg. “An Empirical Investigation of the Effects of Type Systems and Code Completion on API Usability Using TypeScript and JavaScript in MS Visual Studio.” In 11th Symposium on Dynamic Languages (DLS’15). ACM Press, 2015. doi:10.1145/2816707.2816720. Found that static typing improved programmer efficiency independently of code completion.

[Fisl2014] Fisler, Kathi. “The Recurring Rainfall Problem.” In 2014 International Computing Education Research Conference (ICER’14). Association for Computing Machinery (ACM), 2014. doi:10.1145/2632320.2632346. Reports that students made fewer low-level errors when solving the Rainfall Problem in a functional language.

[Fitz2008] Fitzgerald, Sue, Gary Lewandowski, Renée McCauley, Laurie Murphy, Beth Simon, Lynda Thomas, and Carol Zander. “Debugging: Finding, Fixing and Flailing, a Multi-Institutional Study of Novice Debuggers.” Computer Science Education 18, no. 2 (June 2008): 93–116. doi:10.1080/08993400802114508. Reports that good undergraduate debuggers are good programmers but not necessarily vice versa, and that novices use tracing and testing rather than causal reasoning.

[Foge2005] Fogel, Karl. Producing Open Source Software: How to Run a Successful Free Software Project. O’Reilly Media, 2005. The definitive guide to managing open source software development projects.

[Ford2016] Ford, Denae, Justin Smith, Philip J. Guo, and Chris Parnin. “Paradise Unplugged: Identifying Barriers for Female Participation on Stack Overflow.” In 2016 International Symposium on Foundations of Software Engineering (FSE’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2950290.2950331. Reports that lack of awareness of site features, feeling unqualified to answer questions, intimidating community size, discomfort interacting with or relying on strangers, and the perception that they shouldn’t be slacking were seen as significantly more problematic by female Stack Overflow contributors than by male ones.

[Fran2018] Frank-Bolton, Pablo, and Rahul Simha. “Docendo Discimus: Students Learn by Teaching Peers Through Video.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159466. Reports that students who make short videos to teach concepts to their peers have a significant increase in their own learning compared to those who only study the material or view videos.

[Free1972] Freeman, Jo. “The Tyranny of Structurelessness.” The Second Wave 2, no. 1 (1972). Points out that every organization has a power structure: the only question is whether it’s accountable or not.

[Free2014] Freeman, Scott, Sarah L. Eddy, Miles McDonough, Michelle K. Smith, Nnadozie Okoroafor, Hannah Jordt, and Mary Pat Wenderoth. “Active Learning Increases Student Performance in Science, Engineering, and Mathematics.” Proceedings of the National Academy of Sciences 111, no. 23 (May 2014): 8410–5. doi:10.1073/pnas.1319030111. Presents a meta-analysis of the benefits of active learning.

[Frie2016] Friend, Marilyn, and Lynne Cook. Interactions: Collaboration Skills for School Professionals. Eighth. Pearson, 2016. A textbook on how teachers can work with other teachers.

[Galp2002] Galpin, Vashti. “Women in Computing Around the World.” ACM SIGCSE Bulletin 34, no. 2 (June 2002). doi:10.1145/543812.543839. Looks at female participation in computing in 35 countries.

[Gauc2011] Gaucher, Danielle, Justin Friesen, and Aaron C. Kay. “Evidence That Gendered Wording in Job Advertisements Exists and Sustains Gender Inequality.” Journal of Personality and Social Psychology 101, no. 1 (2011): 109–28. doi:10.1037/a0022530. Reports that gendered wording in job recruitment materials can maintain gender inequality in traditionally male-dominated occupations.

[Gawa2011] Gawande, Atul. “Personal Best.” The New Yorker, October 3, 2011. Describes how having a coach can improve practice in a variety of fields.

[Gawa2007] ———. “The Checklist.” The New Yorker, December 10, 2007. Describes the life-saving effects of simple checklists.

[Gick1987] Gick, Mary L., and Keith J. Holyoak. “The Cognitive Basis of Knowledge Transfer.” In Transfer of Learning: Contemporary Research and Applications, edited by S. J. Cormier and J. D. Hagman, 9–46. Elsevier, 1987. doi:10.1016/b978-0-12-188950-0.50008-4. Finds that transference only comes with mastery.

[Gorm2014] Gormally, Cara, Mara Evans, and Peggy Brickman. “Feedback About Teaching in Higher Ed: Neglected Opportunities to Promote Change.” Cell Biology Education 13, no. 2 (June 2014): 187–99. doi:10.1187/cbe.13-12-0235. Summarizes best practices for providing instructional feedback, and recommends some specific strategies.

[Gree2014] Green, Elizabeth. Building a Better Teacher: How Teaching Works (and How to Teach It to Everyone). W. W. Norton & Company, 2014. Explains why educational reforms in the past fifty years have mostly missed the mark, and what we should do instead.

[Grif2016] Griffin, Jean M. “Learning by Taking Apart.” In 2016 Conference on Information Technology Education (SIGITE’16). ACM Press, 2016. doi:10.1145/2978192.2978231. Reports that people learn to program more quickly by deconstructing code than by writing it.

[Grov2017] Grover, Shuchi, and Satabdi Basu. “Measuring Student Learning in Introductory Block-Based Programming.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017723. Reports that middle-school children using blocks-based programming find loops, variables, and Boolean operators difficult to understand.

[Gull2004] Gulley, Ned. “In Praise of Tweaking.” Interactions 11, no. 3 (May 2004): 18. doi:10.1145/986253.986264. Describes an innovative collaborative coding contest.

[Guo2013] Guo, Philip J. “Online Python Tutor.” In 2013 Technical Symposium on Computer Science Education (SIGCSE’13). Association for Computing Machinery (ACM), 2013. doi:10.1145/2445196.2445368. Describes the design and use of a web-based execution visualization tool.

[Guo2014] Guo, Philip J., Juho Kim, and Rob Rubin. “How Video Production Affects Student Engagement.” In 2014 Conference on Learning @ Scale (L@S’14). Association for Computing Machinery (ACM), 2014. doi:10.1145/2556325.2566239. Measures learner engagement with MOOC videos and reports that short videos are more engaging than long ones and that talking heads are more engaging than tablet drawings.

[Guzd2013] Guzdial, Mark. “Exploring Hypotheses About Media Computation.” In 2013 International Computing Education Research Conference (ICER’13). Association for Computing Machinery (ACM), 2013. doi:10.1145/2493394.2493397. A look back on ten years of media computation research.

[Guzd2016] ———. “Five Principles for Programming Languages for Learners.” https://cacm.acm.org/blogs/blog-cacm/203554-five-principles-for-programming-languages-for-learners/fulltext, 2016. Explains how to choose a programming language for people who are new to programming.

[Guzd2015a] ———. Learner-Centered Design of Computing Education: Research on Computing for Everyone. Morgan & Claypool Publishers, 2015. Argues that we must design computing education for everyone, not just people who think they are going to become professional programmers.

[Guzd2015b] ———. “Top 10 Myths About Teaching Computer Science.” https://cacm.acm.org/blogs/blog-cacm/189498-top-10-myths-about-teaching-computer-science/fulltext, 2015. Ten things many people believe about teaching computing that simply aren’t true.

[Haar2017] Haaranen, Lassi. “Programming as a Performance - Live-Streaming and Its Implications for Computer Science Education.” In 2017 Conference on Innovation and Technology in Computer Science Education (ITiCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3059009.3059035. An early look at live streaming of coding as a teaching technique.

[Hagg2016] Hagger, M. S., N. L. D. Chatzisarantis, H. Alberts, C. O. Anggono, C. Batailler, A. R. Birt, R. Brand, et al. “A Multilab Preregistered Replication of the Ego-Depletion Effect.” Perspectives on Psychological Science 11, no. 4 (2016): 546–73. doi:10.1177/1745691616652873. A meta-analysis that found insufficient evidence to substantiate the ego depletion effect.

[Hake1998] Hake, Richard R. “Interactive Engagement Versus Traditional Methods: A Six-Thousand-Student Survey of Mechanics Test Data for Introductory Physics Courses.” American Journal of Physics 66, no. 1 (January 1998): 64–74. doi:10.1119/1.18809. Reports the use of a concept inventory to measure the benefits of interactive engagement as a teaching technique.

[Hamo2017] Hamouda, Sally, Stephen H. Edwards, Hicham G. Elmongui, Jeremy V. Ernst, and Clifford A. Shaffer. “A Basic Recursion Concept Inventory.” Computer Science Education 27, no. 2 (April 2017): 121–48. doi:10.1080/08993408.2017.1414728. Reports early work on developing a concept inventory for recursion.

[Hank2011] Hanks, Brian, Sue Fitzgerald, Renée McCauley, Laurie Murphy, and Carol Zander. “Pair Programming in Education: A Literature Review.” Computer Science Education 21, no. 2 (June 2011): 135–73. doi:10.1080/08993408.2011.579808. Reports increased success rates and retention with pair programming, with some evidence that it is particularly beneficial for women, but finds that scheduling and partner compatibility can be problematic.

[Hann2010] Hannay, Jo Erskine, Erik Arisholm, Harald Engvik, and Dag I. K. Sjøberg. “Effects of Personality on Pair Programming.” IEEE Transactions on Software Engineering 36, no. 1 (January 2010): 61–80. doi:10.1109/tse.2009.41. Reports weak correlation between the “Big Five” personality traits and performance in pair programming.

[Hann2009] Hannay, Jo Erskine, Tore Dybå, Erik Arisholm, and Dag I. K. Sjøberg. “The Effectiveness of Pair Programming: A Meta-Analysis.” Information and Software Technology 51, no. 7 (July 2009): 1110–22. doi:10.1016/j.infsof.2009.02.001. A comprehensive meta-analysis of research on pair programming.

[Hans2015] Hansen, John D., and Justin Reich. “Democratizing Education? Examining Access and Usage Patterns in Massive Open Online Courses.” Science 350, no. 6265 (December 2015): 1245–48. doi:10.1126/science.aab3782. Reports that MOOCs are mostly used by the affluent.

[Harm2016] Harms, Kyle James, Jason Chen, and Caitlin L. Kelleher. “Distractors in Parsons Problems Decrease Learning Efficiency for Young Novice Programmers.” In 2016 International Computing Education Research Conference (ICER’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2960310.2960314. Shows that adding distractors to Parsons Problems does not improve learning outcomes but increases solution times.

[Harr2018] Harrington, Brian, and Nick Cheng. “Tracing vs. Writing Code: Beyond the Learning Hierarchy.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159530. Finds that the gap between being able to trace code and being able to write it has largely closed by CS2, and that students who still have a gap (in either direction) are likely to do poorly in the course.

[Hazz2014] Hazzan, Orit, Tami Lapidot, and Noa Ragonis. Guide to Teaching Computer Science: An Activity-Based Approach. Second. Springer, 2014. A textbook for teaching computer science at the K-12 level with dozens of activities.

[Hend2015a] Henderson, Charles, Renée Cole, Jeff Froyd, Debra Friedrichsen, Raina Khatri, and Courtney Stanford. Designing Educational Innovations for Sustained Adoption. Increase the Impact, 2015. A detailed analysis of strategies for getting institutions in higher education to make changes.

[Hend2015b] ———. “Designing Educational Innovations for Sustained Adoption (Executive Summary).” http://www.increasetheimpact.com/resources.html; Increase the Impact, 2015. A short summary of key points from the authors’ work on effecting change in higher education.

[Hend2017] Hendrick, Carl, and Robin Macpherson. What Does This Look Like in the Classroom?: Bridging the Gap Between Research and Practice. John Catt Educational, 2017. A collection of responses by educational experts to questions asked by classroom teachers, with prefaces by the authors.

[Henr2010] Henrich, Joseph, Steven J. Heine, and Ara Norenzayan. “The Weirdest People in the World?” Behavioral and Brain Sciences 33, nos. 2-3 (June 2010): 61–83. doi:10.1017/s0140525x0999152x. Points out that the subjects of most published psychological studies are Western, educated, industrialized, rich, and democratic.

[Hest1992] Hestenes, David, Malcolm Wells, and Gregg Swackhamer. “Force Concept Inventory.” The Physics Teacher 30, no. 3 (March 1992): 141–58. doi:10.1119/1.2343497. Describes the Force Concept Inventory’s motivation, design, and impact.

[Hick2018] Hicks, Marie. Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing. MIT Press, 2018. Describes how Britain lost its early dominance in computing by systematically discriminating against its most qualified workers: women.

[Hofm2017] Hofmeister, Johannes, Janet Siegmund, and Daniel V. Holt. “Shorter Identifier Names Take Longer to Comprehend.” In 2017 Conference on Software Analysis, Evolution and Reengineering (SANER’17). Institute of Electrical and Electronics Engineers (IEEE), 2017. doi:10.1109/saner.2017.7884623. Reports that using words for variable names makes comprehension faster than using abbreviations or single-letter names for variables.

[Holl1960] Hollingsworth, Jack. “Automatic Graders for Programming Classes.” Communications of the ACM 3, no. 10 (October 1960): 528–29. doi:10.1145/367415.367422. A brief note describing what may have been the world’s first auto-grader.

[Hu2017] Hu, Helen H., Cecily Heiner, Thomas Gagne, and Carl Lyman. “Building a Statewide Computer Science Teacher Pipeline.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017788. Reports that a six-month program for high school teachers converting to teach CS quadruples the number of teachers without noticeable reduction of student outcomes and increases teachers’ belief that anyone can program.

[Hust2012] Huston, Therese. Teaching What You Don’t Know. Harvard University Press, 2012. A pointed, funny, and very useful exploration of exactly what the title says.

[Ihan2010] Ihantola, Petri, Tuukka Ahoniemi, Ville Karavirta, and Otto Seppälä. “Review of Recent Systems for Automatic Assessment of Programming Assignments.” In 2010 Koli Calling Conference on Computing Education Research (Koli’10). Association for Computing Machinery (ACM), 2010. doi:10.1145/1930464.1930480. Reviews auto-grading tools of the time.

[Ihan2011] Ihantola, Petri, and Ville Karavirta. “Two-Dimensional Parson’s Puzzles: The Concept, Tools, and First Observations.” Journal of Information Technology Education: Innovations in Practice 10 (2011): 119–32. doi:10.28945/1394. Describes a 2D Parsons Problem tool and early experiences with it that confirm that experts solve outside-in rather than line-by-line.

[Ihan2016] Ihantola, Petri, Kelly Rivers, Miguel Ángel Rubio, Judy Sheard, Bronius Skupas, Jaime Spacco, Claudia Szabo, et al. “Educational Data Mining and Learning Analytics in Programming: Literature Review and Case Studies.” In 2016 Conference on Innovation and Technology in Computer Science Education (ITiCSE’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2858796.2858798. A survey of methods used in mining and analyzing programming data.

[Ijss2000] IJsselsteijn, Wijnand A., Huib de Ridder, Jonathan Freeman, and Steve E. Avons. “Presence: Concept, Determinants, and Measurement.” In 2000 Conference on Human Vision and Electronic Imaging, edited by Bernice E. Rogowitz and Thrasyvoulos N. Pappas. SPIE, 2000. doi:10.1117/12.387188. Summarizes thinking of the time about real and virtual presence.

[Irib2009] Iriberri, Alicia, and Gondy Leroy. “A Life-Cycle Perspective on Online Community Success.” ACM Computing Surveys 41, no. 2 (February 2009): 1–29. doi:10.1145/1459352.1459356. Reviews research on online communities organized according to a five-stage lifecycle model.

[Juss2005] Jussim, Lee, and Kent D. Harber. “Teacher Expectations and Self-Fulfilling Prophecies: Knowns and Unknowns, Resolved and Unresolved Controversies.” Personality and Social Psychology Review 9, no. 2 (May 2005): 131–55. doi:10.1207/s15327957pspr0902_3. A survey of the effects of teacher expectations on student outcomes.

[Kaly2003] Kalyuga, Slava, Paul Ayres, Paul Chandler, and John Sweller. “The Expertise Reversal Effect.” Educational Psychologist 38, no. 1 (March 2003): 23–31. doi:10.1207/s15326985ep3801_4. Reports that instructional techniques that work well with inexperienced learners lose their effectiveness or have negative consequences when used with more experienced learners.

[Kaly2015] Kalyuga, Slava, and Anne-Marie Singh. “Rethinking the Boundaries of Cognitive Load Theory in Complex Learning.” Educational Psychology Review 28, no. 4 (December 2015): 831–52. doi:10.1007/s10648-015-9352-0. Argues that cognitive load theory is basically micro-management within a broader pedagogical context.

[Kang2016] Kang, Sean H. K. “Spaced Repetition Promotes Efficient and Effective Learning.” Policy Insights from the Behavioral and Brain Sciences 3, no. 1 (January 2016): 12–19. doi:10.1177/2372732215624708. Summarizes research on spaced repetition and what it means for classroom teaching.

[Kapu2016] Kapur, Manu. “Examining Productive Failure, Productive Success, Unproductive Failure, and Unproductive Success in Learning.” Educational Psychologist 51, no. 2 (April 2016): 289–99. doi:10.1080/00461520.2016.1155457. Looks at productive failure as an alternative to inquiry-based learning and approaches based on cognitive load theory.

[Karp2008] Karpicke, Jeffrey D., and Henry L. Roediger. “The Critical Importance of Retrieval for Learning.” Science 319, no. 5865 (February 2008): 966–68. doi:10.1126/science.1152408. Reports that repeated testing improves recall of word lists from 35% to 80%, even when learners can still access the material but are not tested on it.

[Kauf2000] Kaufman, Deborah B., and Richard M. Felder. “Accounting for Individual Effort in Cooperative Learning Teams.” Journal of Engineering Education 89, no. 2 (2000). Reports that self-rating and peer ratings in undergraduate courses agree, that collusion isn’t significant, that students don’t inflate their self-ratings, and that ratings are not biased by gender or race.

[Keme2009] Kemerer, Chris F., and Mark C. Paulk. “The Impact of Design and Code Reviews on Software Quality: An Empirical Study Based on PSP Data.” IEEE Transactions on Software Engineering 35, no. 4 (July 2009): 534–50. doi:10.1109/tse.2009.27. Uses individual data to explore the effectiveness of code review.

[Kepp2008] Keppens, Jeroen, and David Hay. “Concept Map Assessment for Teaching Computer Programming.” Computer Science Education 18, no. 1 (March 2008): 31–42. doi:10.1080/08993400701864880. A short review of ways concept mapping can be used in CS education.

[Kern1999] Kernighan, Brian W., and Rob Pike. The Practice of Programming. Addison-Wesley, 1999. A programming style manual written by two of the creators of modern computing.

[Kern1983] ———. The Unix Programming Environment. Prentice-Hall, 1983. An influential early description of Unix.

[Kern1978] Kernighan, Brian W., and P. J. Plauger. The Elements of Programming Style. Second. McGraw-Hill, 1978. An early and influential description of the Unix programming philosophy.

[Kern1988] Kernighan, Brian W., and Dennis M. Ritchie. The C Programming Language. Second. Prentice-Hall, 1988. The book that made C a popular programming language.

[Keun2016a] Keuning, Hieke, Johan Jeuring, and Bastiaan Heeren. “Towards a Systematic Review of Automated Feedback Generation for Programming Exercises.” In 2016 Conference on Innovation and Technology in Computer Science Education (ITiCSE’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2899415.2899422. Reports that auto-grading tools often do not give feedback on what to do next, and that teachers cannot easily adapt most of the tools to their needs.

[Keun2016b] ———. “Towards a Systematic Review of Automated Feedback Generation for Programming Exercises - Extended Version.” Technical Report UU-CS-2016-001, Utrecht University, 2016. An extended look at feedback messages from auto-grading tools.

[Kim2017] Kim, Ada S., and Amy J. Ko. “A Pedagogical Analysis of Online Coding Tutorials.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017728. Reports that online coding tutorials largely teach similar content, organize content bottom-up, and provide goal-directed practices with immediate feedback, but are not tailored to learners’ prior coding knowledge and usually don’t tell learners how to transfer and apply knowledge.

[King1993] King, Alison. “From Sage on the Stage to Guide on the Side.” College Teaching 41, no. 1 (January 1993): 30–35. doi:10.1080/87567555.1993.9926781. An early proposal to flip the classroom.

[Kirk1994] Kirkpatrick, Donald L. Evaluating Training Programs: The Four Levels. Berrett-Koehle, 1994. Defines a widely-used four-level model for evaluating training.

[Kirs2006] Kirschner, Paul A., John Sweller, and Richard E. Clark. “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching.” Educational Psychologist 41, no. 2 (June 2006): 75–86. doi:10.1207/s15326985ep4102_1. Argues that inquiry-based learning is less effective for novices than guided instruction.

[Kirs2018] Kirschner, Paul A., John Sweller, Femke Kirschner, and Jimmy Zambrano R. “From Cognitive Load Theory to Collaborative Cognitive Load Theory.” International Journal of Computer-Supported Collaborative Learning, April 2018. doi:10.1007/s11412-018-9277-y. Extends cognitive load theory to include collaborative aspects of learning.

[Kirs2013] Kirschner, Paul A., and Jeroen J. G. van Merriënboer. “Do Learners Really Know Best? Urban Legends in Education.” Educational Psychologist 48, no. 3 (July 2013): 169–83. doi:10.1080/00461520.2013.804395. Argues that three learning myths—digital natives, learning styles, and self-educators—all reflect the mistaken belief that learners know what is best for them, and cautions that we may be in a downward spiral in which every attempt by education researchers to rebut these myths confirms their opponents’ belief that learning science is pseudo-science.

[Koed2015] Koedinger, Kenneth R., Jihee Kim, Julianna Zhuxin Jia, Elizabeth A. McLaughlin, and Norman L. Bier. “Learning Is Not a Spectator Sport: Doing Is Better Than Watching for Learning from a MOOC.” In 2015 Conference on Learning @ Scale (L@S’15). Association for Computing Machinery (ACM), 2015. doi:10.1145/2724660.2724681. Measures the benefits of doing rather than watching.

[Koeh2013] Koehler, Matthew J., Punya Mishra, and William Cain. “What Is Technological Pedagogical Content Knowledge (TPACK)?” Journal of Education 193, no. 3 (2013): 13–19. doi:10.1177/002205741319300303. Refines the discussion of PCK by adding technology, and sketches strategies for building understanding of how to use it.

[Kohn2017] Kohn, Tobias. “Variable Evaluation: An Exploration of Novice Programmers’ Understanding and Common Misconceptions.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017724. Reports that students often believe in delayed evaluation or that entire equations are stored in variables.

[Koll2015] Kölling, Michael. “Lessons from the Design of Three Educational Programming Environments.” International Journal of People-Oriented Programming 4, no. 1 (January 2015): 5–32. doi:10.4018/ijpop.2015010102. Compares three generations of programming environments intended for novice use.

[Krau2016] Kraut, Robert E., and Paul Resnick. Building Successful Online Communities: Evidence-Based Social Design. MIT Press, 2016. Sums up what we actually know about making thriving online communities and why we believe it’s true.

[Krug1999] Kruger, Justin, and David Dunning. “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.” Journal of Personality and Social Psychology 77, no. 6 (1999): 1121–34. doi:10.1037/0022-3514.77.6.1121. The original report on the Dunning-Kruger effect: the less people know, the less accurate their estimate of their knowledge.

[Kuch2011] Kuchner, Marc J. Marketing for Scientists: How to Shine in Tough Times. Island Press, 2011. A short, readable guide to making people aware of, and care about, your work.

[Kuit2004] Kuittinen, Marja, and Jorma Sajaniemi. “Teaching Roles of Variables in Elementary Programming Courses.” ACM SIGCSE Bulletin 36, no. 3 (September 2004): 57. doi:10.1145/1026487.1008014. Presents a few patterns used in novice programming and the pedagogical value of teaching them.

[Kulk2013] Kulkarni, Chinmay, Koh Pang Wei, Huy Le, Daniel Chia, Kathryn Papadopoulos, Justin Cheng, Daphne Koller, and Scott R. Klemmer. “Peer and Self Assessment in Massive Online Classes.” ACM Transactions on Computer-Human Interaction 20, no. 6 (December 2013): 1–31. doi:10.1145/2505057. Shows that peer grading can be as effective at scale as expert grading.

[Laba2008] Labaree, David F. “The Winning Ways of a Losing Strategy: Educationalizing Social Problems in the United States.” Educational Theory 58, no. 4 (November 2008): 447–60. doi:10.1111/j.1741-5446.2008.00299.x. Explores why the United States keeps pushing the solution of social problems onto educational institutions, and why that continues not to work.

[Lach2018] Lachney, Michael. “Computational Communities: African-American Cultural Capital in Computer Science Education.” Computer Science Education, February 2018, 1–22. doi:10.1080/08993408.2018.1429062. Explores use of community representation and computational integration to bridge computing and African-American cultural capital in CS education.

[Lake2018] Lakey, George. How We Win: A Guide to Nonviolent Direct Action Campaigning. Melville House, 2018. A short experience-based guide to effective campaigning.

[Lang2013] Lang, James M. Cheating Lessons: Learning from Academic Dishonesty. Harvard University Press, 2013. Explores why students cheat, and how courses often give them incentives to do so.

[Lang2016] ———. Small Teaching: Everyday Lessons from the Science of Learning. Jossey-Bass, 2016. Presents a selection of accessible evidence-based practices that teachers can adopt when they have little time and few resources.

[Lazo1993] Lazonder, Ard W., and Hans van der Meij. “The Minimal Manual: Is Less Really More?” International Journal of Man-Machine Studies 39, no. 5 (November 1993): 729–52. doi:10.1006/imms.1993.1081. Reports that the minimal manual approach to instruction outperforms traditional approaches regardless of prior experience with computers.

[Leak2017] Leake, Mackenzie, and Colleen M. Lewis. “Recommendations for Designing CS Resource Sharing Sites for All Teachers.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017780. Explores why CS teachers don’t use resource sharing sites and recommends ways to make them more appealing.

[Lee2013] Lee, Cynthia Bailey. “Experience Report: CS1 in MATLAB for Non-Majors, with Media Computation and Peer Instruction.” In 2013 Technical Symposium on Computer Science Education (SIGCSE’13). Association for Computing Machinery (ACM), 2013. doi:10.1145/2445196.2445214. Describes an adaptation of media computation to a first-year MATLAB course.

[Lee2017] ———. “What Can I Do Today to Create a More Inclusive Community in CS?” http://bit.ly/2oynmSH, 2017. A practical checklist of things instructors can do to make their computing classes more inclusive.

[Lemo2014] Lemov, Doug. Teach Like a Champion 2.0: 62 Techniques That Put Students on the Path to College. Jossey-Bass, 2014. Presents 62 classroom techniques drawn from intensive study of thousands of hours of video of good teachers in action.

[Lewi2015] Lewis, Colleen M., and Niral Shah. “How Equity and Inequity Can Emerge in Pair Programming.” In 2015 International Computing Education Research Conference (ICER’15). Association for Computing Machinery (ACM), 2015. doi:10.1145/2787622.2787716. Reports a study of pair programming in a middle-grade classroom in which less equitable pairs were ones that sought to complete the task quickly.

[List2009] Lister, Raymond, Colin Fidge, and Donna Teague. “Further Evidence of a Relationship Between Explaining, Tracing and Writing Skills in Introductory Programming.” ACM SIGCSE Bulletin 41, no. 3 (August 2009): 161. doi:10.1145/1595496.1562930. Replicates earlier studies showing that students who cannot trace code usually cannot explain code and that students who tend to perform reasonably well at code writing tasks have also usually acquired the ability to both trace code and explain code.

[List2004] Lister, Raymond, Otto Seppälä, Beth Simon, Lynda Thomas, Elizabeth S. Adams, Sue Fitzgerald, William Fone, et al. “A Multi-National Study of Reading and Tracing Skills in Novice Programmers.” In 2004 Conference on Innovation and Technology in Computer Science Education (ITiCSE’04). Association for Computing Machinery (ACM), 2004. doi:10.1145/1044550.1041673. Reports that students are weak at both predicting the outcome of executing a short piece of code and at selecting the correct completion for short pieces of code.

[Litt2004] Littky, Dennis. The Big Picture: Education Is Everyone’s Business. Association for Supervision & Curriculum Development (ASCD), 2004. Essays on the purpose of education and how to make schools better.

[Luxt2009] Luxton-Reilly, Andrew. “A Systematic Review of Tools That Support Peer Assessment.” Computer Science Education 19, no. 4 (December 2009): 209–32. doi:10.1080/08993400903384844. Surveys peer assessment tools that may be of use in computing education.

[Luxt2017] Luxton-Reilly, Andrew, Jacqueline Whalley, Brett A. Becker, Yingjun Cao, Roger McDermott, Claudio Mirolo, Andreas Mühling, Andrew Petersen, Kate Sanders, and Simon. “Developing Assessments to Determine Mastery of Programming Fundamentals.” In 2017 Conference on Innovation and Technology in Computer Science Education (ITiCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3174781.3174784. Synthesizes work from many previous works to determine what CS instructors are actually teaching, how those things depend on each other, and how they might be assessed.

[Macn2014] Macnamara, Brooke N., David Z. Hambrick, and Frederick L. Oswald. “Deliberate Practice and Performance in Music, Games, Sports, Education, and Professions: A Meta-Analysis.” Psychological Science 25, no. 8 (July 2014): 1608–18. doi:10.1177/0956797614535810. A meta-study of the effectiveness of deliberate practice.

[Magu2018] Maguire, Phil, Rebecca Maguire, and Robert Kelly. “Using Automatic Machine Assessment to Teach Computer Programming.” Computer Science Education, February 2018, 1–18. doi:10.1080/08993408.2018.1435113. Reports that weekly machine-evaluated tests are a better predictor of exam scores than labs (but that students didn’t like the system).

[Majo2015] Major, Claire Howell, Michael S. Harris, and Tod Zakrajsek. Teaching for Learning: 101 Intentionally Designed Educational Activities to Put Students on the Path to Success. Routledge, 2015. Catalogs a hundred different kinds of exercises to do with students.

[Malo2010] Maloney, John, Mitchel Resnick, Natalie Rusk, Brian Silverman, and Evelyn Eastmond. “The Scratch Programming Language and Environment.” ACM Transactions on Computing Education 10, no. 4 (November 2010): 1–15. doi:10.1145/1868358.1868363. Summarizes the design of the first generation of Scratch.

[Mann2015] Manns, Mary Lynn, and Linda Rising. Fearless Change: Patterns for Introducing New Ideas. Addison-Wesley, 2015. A catalog of patterns for making change happen in large organizations.

[Marc2011] Marceau, Guillaume, Kathi Fisler, and Shriram Krishnamurthi. “Measuring the Effectiveness of Error Messages Designed for Novice Programmers.” In 2011 Technical Symposium on Computer Science Education (SIGCSE’11). Association for Computing Machinery (ACM), 2011. doi:10.1145/1953163.1953308. Looks at edit-level responses to error messages, and introduces a useful rubric for classifying user responses to errors.

[Marg2015] Margaryan, Anoush, Manuela Bianco, and Allison Littlejohn. “Instructional Quality of Massive Open Online Courses (MOOCs).” Computers & Education 80 (January 2015): 77–83. doi:10.1016/j.compedu.2014.08.005. Reports that the instructional design quality of MOOCs is poor, but that the organization and presentation of material is good.

[Marg2010] Margolis, Jane, Rachel Estrella, Joanna Goode, Jennifer Jellison Holme, and Kim Nao. Stuck in the Shallow End: Education, Race, and Computing. MIT Press, 2010. Dissects the school structures and belief systems that lead to under-representation of African American and Latinx students in computing.

[Marg2003] Margolis, Jane, and Allan Fisher. Unlocking the Clubhouse: Women in Computing. MIT Press, 2003. A groundbreaking report on the gender imbalance in computing, and the steps Carnegie Mellon took to address the problem.

[Marg2016] Margulieux, Lauren E., Richard Catrambone, and Mark Guzdial. “Employing Subgoals in Computer Programming Education.” Computer Science Education 26, no. 1 (January 2016): 44–67. doi:10.1080/08993408.2016.1144429. Reports that labelled subgoals improve learning outcomes in introductory computing courses.

[Marg2012] Margulieux, Lauren E., Mark Guzdial, and Richard Catrambone. “Subgoal-Labeled Instructional Material Improves Performance and Transfer in Learning to Develop Mobile Applications.” In 2012 International Computing Education Research Conference (ICER’12), 71–78. ACM Press, 2012. doi:10.1145/2361276.2361291. Reports that labelled subgoals improve outcomes and transference when learning about mobile app development.

[Mark2018] Markovits, Rebecca A., and Yana Weinstein. “Can Cognitive Processes Help Explain the Success of Instructional Techniques Recommended by Behavior Analysts?” NPJ Science of Learning 3, no. 1 (January 2018). doi:10.1038/s41539-017-0018-1. Points out that behaviorists and cognitive psychologists differ in approach but wind up making very similar recommendations about how to teach, and gives two specific examples.

[Mars2002] Marsh, Herbert W., and John Hattie. “The Relation Between Research Productivity and Teaching Effectiveness: Complementary, Antagonistic, or Independent Constructs?” Journal of Higher Education 73, no. 5 (2002): 603–41. doi:10.1353/jhe.2002.0047. One study of many showing there is zero correlation between research ability and teaching effectiveness.

[Masa2018] Masapanta-Carrión, Susana, and J. Ángel Velázquez-Iturbide. “A Systematic Review of the Use of Bloom’s Taxonomy in Computer Science Education.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159491. Reports that even experienced educators have trouble agreeing on the correct classification for a question or idea using Bloom’s Taxonomy.

[Maso2016] Mason, Raina, Carolyn Seton, and Graham Cooper. “Applying Cognitive Load Theory to the Redesign of a Conventional Database Systems Course.” Computer Science Education 26, no. 1 (January 2016): 68–87. doi:10.1080/08993408.2016.1160597. Reports how redesigning a database course using cognitive load theory reduced exam failure rate while increasing student satisfaction.

[Matt2019] Matthes, Eric. Python Flash Cards: Syntax, Concepts, and Examples. No Starch Press, 2019. Handy flashcards summarizing the core of Python 3.

[Maye2009] Mayer, Richard E. Multimedia Learning. Second. Cambridge University Press, 2009. Presents a cognitive theory of multimedia learning.

[Maye2004] ———. “Teaching of Subject Matter.” Annual Review of Psychology 55, no. 1 (February 2004): 715–44. doi:10.1146/annurev.psych.55.082602.133124. An overview of how and why teaching and learning are subject-specific.

[Maye2003] Mayer, Richard E., and Roxana Moreno. “Nine Ways to Reduce Cognitive Load in Multimedia Learning.” Educational Psychologist 38, no. 1 (March 2003): 43–52. doi:10.1207/s15326985ep3801_6. Shows how research into how we absorb and process information can be applied to the design of instructional materials.

[Mazu1996] Mazur, Eric. Peer Instruction: A User’s Manual. Prentice-Hall, 1996. A guide to implementing peer instruction.

[McCa2008] McCauley, Renée, Sue Fitzgerald, Gary Lewandowski, Laurie Murphy, Beth Simon, Lynda Thomas, and Carol Zander. “Debugging: A Review of the Literature from an Educational Perspective.” Computer Science Education 18, no. 2 (June 2008): 67–92. doi:10.1080/08993400802114581. Summarizes research about why bugs occur, what types there are, how people debug, and whether we can teach debugging skills.

[McCr2001] McCracken, Michael, Tadeusz Wilusz, Vicki Almstrum, Danny Diaz, Mark Guzdial, Dianne Hagan, Yifat Ben-David Kolikant, Cary Laxer, Lynda Thomas, and Ian Utting. “A Multi-National, Multi-Institutional Study of Assessment of Programming Skills of First-Year CS Students.” In 2001 Conference on Innovation and Technology in Computer Science Education (ITiCSE’01). Association for Computing Machinery (ACM), 2001. doi:10.1145/572133.572137. Reports that most students still struggle to solve even basic programming problems at the end of their introductory course.

[McDo2006] McDowell, Charlie, Linda Werner, Heather E. Bullock, and Julian Fernald. “Pair Programming Improves Student Retention, Confidence, and Program Quality.” Communications of the ACM 49, no. 8 (August 2006): 90–95. doi:10.1145/1145287.1145293. A summary of research showing that pair programming improves retention and confidence.

[McGu2015] McGuire, Saundra Yancey. Teach Students How to Learn: Strategies You Can Incorporate into Any Course to Improve Student Metacognition, Study Skills, and Motivation. Stylus Publishing, 2015. Explains how metacognitive strategies can improve learning.

[McMi2017] McMillan Cottom, Tressie. Lower Ed: The Troubling Rise of For-Profit Colleges in the New Economy. The New Press, 2017. Lays bare the dynamics of the growing educational industry to show how it leads to greater inequality rather than less.

[McTi2013] McTighe, Jay, and Grant Wiggins. “Understanding by Design Framework.” http://www.ascd.org/ASCD/pdf/siteASCD/publications/UbD_WhitePaper0312.pdf; Association for Supervision & Curriculum Development (ASCD), 2013. Summarizes the backward instructional design process.

[Metc2016] Metcalfe, Janet. “Learning from Errors.” Annual Review of Psychology 68, no. 1 (January 2016): 465–89. doi:10.1146/annurev-psych-010416-044022. Summarizes work on the hypercorrection effect in learning.

[Meys2018] Meysenburg, Mark, Tessa Durham Brooks, Raychelle Burks, Erin Doyle, and Timothy Frey. “DIVAS: Outreach to the Natural Sciences Through Image Processing.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159537. Describes early results from a programming course for science undergrads built around image processing.

[Midw2010] Midwest Academy. Organizing for Social Change: Midwest Academy Manual for Activists. Fourth. The Forum Press, 2010. A training manual for people building progressive social movements.

[Mill2016b] Miller, Craig S., and Amber Settle. “Some Trouble with Transparency: An Analysis of Student Errors with Object-Oriented Python.” In 2016 International Computing Education Research Conference (ICER’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2960310.2960327. Reports that students have difficulty with self in Python.

[Mill2015] Miller, David I., and Jonathan Wai. “The Bachelor’s to Ph.D. STEM Pipeline No Longer Leaks More Women Than Men: A 30-Year Analysis.” Frontiers in Psychology 6 (February 2015). doi:10.3389/fpsyg.2015.00037. Shows that the “leaky pipeline” metaphor stopped being accurate some time in the 1990s.

[Mill1956] Miller, George A. “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information.” Psychological Review 63, no. 2 (1956): 81–97. doi:10.1037/h0043158. The original paper on the limited size of short-term memory.

[Mill2013] Miller, Kelly, Nathaniel Lasry, Kelvin Chu, and Eric Mazur. “Role of Physics Lecture Demonstrations in Conceptual Learning.” Physical Review Special Topics - Physics Education Research 9, no. 2 (September 2013). doi:10.1103/physrevstper.9.020113. Reports a detailed study of what students learn during demonstrations and why.

[Mill2016a] Miller, Michelle D. Minds Online: Teaching Effectively with Technology. Harvard University Press, 2016. Describes ways that insights from neuroscience can be used to improve online teaching.

[Milt2018] Miltner, Kate M. “Girls Who Coded: Gender in Twentieth Century U.K. and U.S. Computing.” Science, Technology, & Human Values, May 2018. doi:10.1177/0162243918770287. A review of three books about how women were systematically pushed out of computing.

[Mina1986] Minahan, Anne. “Martha’s Rules.” Affilia 1, no. 2 (June 1986): 53–56. doi:10.1177/088610998600100206. Describes a lightweight set of rules for consensus-based decision making.

[Miya2018] Miyatsu, Toshiya, Khuyen Nguyen, and Mark A. McDaniel. “Five Popular Study Strategies: Their Pitfalls and Optimal Implementations.” Perspectives on Psychological Science 13, no. 3 (May 2018): 390–407. doi:10.1177/1745691617710510. Explains how learners mis-use common study strategies and what they should do instead.

[Mlad2017] Mladenović, Monika, Ivica Boljat, and Žana Žanko. “Comparing Loops Misconceptions in Block-Based and Text-Based Programming Languages at the K-12 Level.” Education and Information Technologies, November 2017. doi:10.1007/s10639-017-9673-3. Reports that K-12 students have fewer misconceptions about loops using Scratch than using Logo or Python, and fewer misconceptions about nested loops with Logo than with Python.

[More2019] Morehead, Kayla, John Dunlosky, and Katherine A. Rawson. “How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014).” Educational Psychology Review, February 2019. doi:10.1007/s10648-019-09468-2. Reports a failure to replicate an earlier study comparing note-taking by hand and with computers.

[Morr2016] Morrison, Briana B., Lauren E. Margulieux, Barbara J. Ericson, and Mark Guzdial. “Subgoals Help Students Solve Parsons Problems.” In 2016 Technical Symposium on Computer Science Education (SIGCSE’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2839509.2844617. Reports that students using labelled subgoals solve Parsons Problems better than students without labelled subgoals.

[Muel2014] Mueller, Pam A., and Daniel M. Oppenheimer. “The Pen Is Mightier Than the Keyboard.” Psychological Science 25, no. 6 (April 2014): 1159–68. doi:10.1177/0956797614524581. Presents evidence that taking notes by hand is more effective than taking notes on a laptop.

[Mull2007a] Muller, Derek A., James Bewes, Manjula D. Sharma, and Peter Reimann. “Saying the Wrong Thing: Improving Learning with Multimedia by Including Misconceptions.” Journal of Computer Assisted Learning 24, no. 2 (July 2007): 144–55. doi:10.1111/j.1365-2729.2007.00248.x. Reports that including explicit discussion of misconceptions significantly improves learning outcomes: students with low prior knowledge benefit most and students with more prior knowledge are not disadvantaged.

[Mull2007b] Muller, Orna, David Ginat, and Bruria Haberman. “Pattern-Oriented Instruction and Its Influence on Problem Decomposition and Solution Construction.” In 2007 Technical Symposium on Computer Science Education (SIGCSE’07). Association for Computing Machinery (ACM), 2007. doi:10.1145/1268784.1268830. Reports that explicitly teaching solution patterns improves learning outcomes.

[Murp2008] Murphy, Laurie, Gary Lewandowski, Renée McCauley, Beth Simon, Lynda Thomas, and Carol Zander. “Debugging: The Good, the Bad, and the Quirky - a Qualitative Analysis of Novices’ Strategies.” ACM SIGCSE Bulletin 40, no. 1 (February 2008): 163. doi:10.1145/1352322.1352191. Reports that many CS1 students use good debugging strategies, but many others don’t, and students often don’t recognize when they are stuck.

[Nara2018] Narayanan, Sathya, Kathryn Cunningham, Sonia Arteaga, William J. Welch, Leslie Maxwell, Zechariah Chawinga, and Bude Su. “Upward Mobility for Underrepresented Students.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159551. Describes an intensive 3-year bachelor’s program based on tight-knit cohorts and administrative support that tripled graduation rates.

[Nath2003] Nathan, Mitchell J., and Anthony Petrosino. “Expert Blind Spot Among Preservice Teachers.” American Educational Research Journal 40, no. 4 (January 2003): 905–28. doi:10.3102/00028312040004905. Early work on expert blind spot.

[Hpl2018] National Academies of Sciences, Engineering, and Medicine. How People Learn II: Learners, Contexts, and Cultures. National Academies Press, 2018. A comprehensive survey of what we know about learning.

[Nils2017] Nilson, Linda B., and Ludwika A. Goodson. Online Teaching at Its Best: Merging Instructional Design with Teaching and Learning Research. Jossey-Bass, 2017. A guide for college instructors that focuses on online teaching.

[Nord2017] Nordmann, Emily, Colin Calder, Paul Bishop, Amy Irwin, and Darren Comber. “Turn up, Tune in, Don’t Drop Out: The Relationship Between Lecture Attendance, Use of Lecture Recordings, and Achievement at Different Levels of Study.” https://psyarxiv.com/fd3yj, 2017. doi:10.17605/OSF.IO/FD3YJ. Reports on the pros and cons of recording lectures.

[Nutb2016] Nutbrown, Stephen, and Colin Higgins. “Static Analysis of Programming Exercises: Fairness, Usefulness and a Method for Application.” Computer Science Education 26, nos. 2-3 (May 2016): 104–28. doi:10.1080/08993408.2016.1179865. Describes ways auto-grader rules were modified and grades weighted to improve correlation between automatic feedback and manual grades.

[Nuth2007] Nuthall, Graham. The Hidden Lives of Learners. NZCER Press, 2007. Summarizes a lifetime of work looking at what students actually do in classrooms and how they actually learn.

[Ojos2015] Ojose, Bobby. Common Misconceptions in Mathematics: Strategies to Correct Them. UPA, 2015. A catalog of K-12 misconceptions in mathematics and what to do about them.

[Ornd2015] Orndorff III, Harold N. “Collaborative Note-Taking: The Impact of Cloud Computing on Classroom Performance.” International Journal of Teaching and Learning in Higher Education 27, no. 3 (2015): 340–51. Reports that taking notes together online is more effective than solo note-taking.

[Ostr2015] Ostrom, Elinor. Governing the Commons: The Evolution of Institutions for Collective Action. Cambridge University Press, 2015. A masterful description and analysis of cooperative governance.

[Pape1993] Papert, Seymour A. Mindstorms: Children, Computers, and Powerful Ideas. Second. Basic Books, 1993. The foundational text on how computers can underpin a new kind of education.

[Pare2008] Paré, Dwayne E., and Steve Joordens. “Peering into Large Lectures: Examining Peer and Expert Mark Agreement Using peerScholar, an Online Peer Assessment Tool.” Journal of Computer Assisted Learning 24, no. 6 (October 2008): 526–40. doi:10.1111/j.1365-2729.2008.00290.x. Shows that peer grading by small groups can be as effective as expert grading once accountability features are introduced.

[Park2015] Park, Thomas H., Brian Dorn, and Andrea Forte. “An Analysis of HTML and CSS Syntax Errors in a Web Development Course.” ACM Transactions on Computing Education 15, no. 1 (March 2015): 1–21. doi:10.1145/2700514. Describes the errors students make in an introductory course on HTML and CSS.

[Park2016] Parker, Miranda C., Mark Guzdial, and Shelly Engleman. “Replication, Validation, and Use of a Language Independent CS1 Knowledge Assessment.” In 2016 International Computing Education Research Conference (ICER’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2960310.2960316. Describes construction and replication of a second concept inventory for basic computing knowledge.

[Parn1986] Parnas, David Lorge, and Paul C. Clements. “A Rational Design Process: How and Why to Fake It.” IEEE Transactions on Software Engineering SE-12, no. 2 (February 1986): 251–57. doi:10.1109/tse.1986.6312940. Argues that using a rational design process is less important than looking as though you had.

[Parn2017] Parnin, Chris, Janet Siegmund, and Norman Peitek. “On the Nature of Programmer Expertise.” In Psychology of Programming Interest Group Workshop 2017, 2017. An annotated exploration of what “expertise” means in programming.

[Pars2006] Parsons, Dale, and Patricia Haden. “Parson’s Programming Puzzles: A Fun and Effective Learning Tool for First Programming Courses.” In 2006 Australasian Conference on Computing Education (ACE’06), 157–63. Australian Computer Society, 2006. The first description of Parsons Problems.

[Part2011] Partanen, Anu. “What Americans Keep Ignoring About Finland’s School Success.” https://www.theatlantic.com/national/archive/2011/12/what-americans-keep-ignoring-about-finlands-school-success/250564/, 2011. Explains that other countries struggle to replicate the success of Finland’s schools because they’re unwilling to tackle larger social factors.

[Pati2016] Patitsas, Elizabeth, Jesse Berlin, Michelle Craig, and Steve Easterbrook. “Evidence That Computer Science Grades Are Not Bimodal.” In 2016 International Computing Education Research Conference (ICER’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2960310.2960312. Presents a statistical analysis and an experiment which jointly show that grades in computing classes are not bimodal.

[Pea1986] Pea, Roy D. “Language-Independent Conceptual ‘Bugs’ in Novice Programming.” Journal of Educational Computing Research 2, no. 1 (February 1986): 25–36. doi:10.2190/689t-1r2a-x4w4-29j2. First named the “superbug” in coding: most newcomers think the computer understands what they want, in the same way that a human being would.

[Petr2016] Petre, Marian, and André van der Hoek. Software Design Decoded: 66 Ways Experts Think. MIT Press, 2016. A short illustrated overview of how expert software developers think.

[Pign2016] Pigni, Alessandra. The Idealist’s Survival Kit: 75 Simple Ways to Prevent Burnout. Parallax Press, 2016. A guide to staying sane and healthy while doing good.

[Port2016] Porter, Leo, Dennis Bouvier, Quintin Cutts, Scott Grissom, Cynthia Bailey Lee, Robert McCartney, Daniel Zingaro, and Beth Simon. “A Multi-Institutional Study of Peer Instruction in Introductory Computing.” In 2016 Technical Symposium on Computer Science Education (SIGCSE’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2839509.2844642. Reports that students in introductory programming classes value peer instruction, and that it improves learning outcomes.

[Port2013] Porter, Leo, Mark Guzdial, Charlie McDowell, and Beth Simon. “Success in Introductory Programming: What Works?” Communications of the ACM 56, no. 8 (August 2013): 34. doi:10.1145/2492007.2492020. Summarizes the evidence that peer instruction, media computation, and pair programming can significantly improve outcomes in introductory programming courses.

[Qian2017] Qian, Yizhou, and James Lehman. “Students’ Misconceptions and Other Difficulties in Introductory Programming.” ACM Transactions on Computing Education 18, no. 1 (October 2017): 1–24. doi:10.1145/3077618. Summarizes research on student misconceptions about computing.

[Rago2017] Ragonis, Noa, and Ronit Shmallo. “On the (Mis)understanding of the this Reference.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017715. Reports that most students do not understand when to use this, and that teachers are also often not clear on the subject.

[Raj2018] Raj, Adalbert Gerald Soosai, Jignesh M. Patel, Richard Halverson, and Erica Rosenfeld Halverson. “Role of Live-Coding in Learning Introductory Programming.” In 2018 Koli Calling International Conference on Computing Education Research (Koli’18), 2018. doi:10.1145/3279720.3279725. A grounded theory analysis of live coding that includes references to previous works.

[Rams2019] Ramsay, G., A. B. Haynes, S. R. Lipsitz, I. Solsky, J. Leitch, A. A. Gawande, and M. Kumar. “Reducing Surgical Mortality in Scotland by Use of the WHO Surgical Safety Checklist.” BJS, April 2019. doi:10.1002/bjs.11151. Found that the introduction of surgical checklists in Scottish hospitals significantly reduced mortality rates.

[Raws2014] Rawson, Katherine A., Ruthann C. Thomas, and Larry L. Jacoby. “The Power of Examples: Illustrative Examples Enhance Conceptual Learning of Declarative Concepts.” Educational Psychology Review 27, no. 3 (June 2014): 483–504. doi:10.1007/s10648-014-9273-3. Reports that presenting examples helps students understand definitions, so long as examples and definitions are interleaved.

[Ray2014] Ray, Eric J., and Deborah S. Ray. Unix and Linux: Visual Quickstart Guide. Fifth. Peachpit Press, 2014. An introduction to Unix that is both a good tutorial and a good reference guide.

[Rice2018] Rice, Gail Taylor. Hitting Pause: 65 Lecture Breaks to Refresh and Reinforce Learning. Stylus Publishing, 2018. Justifies and catalogs ways to take a pause in class to help learning.

[Rich2017] Rich, Kathryn M., Carla Strickland, T. Andrew Binkowski, Cheryl Moran, and Diana Franklin. “K-8 Learning Trajectories Derived from Research Literature.” In 2017 International Computing Education Research Conference (ICER’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3105726.3106166. Presents learning trajectories for K-8 computing classes for Sequence, Repetition, and Conditions gleaned from the literature.

[Ritz2018] Ritz, Anna. “Programming the Central Dogma: An Integrated Unit on Computer Science and Molecular Biology Concepts.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159590. Describes an introductory computing course for biologists whose problems are drawn from the DNA-to-protein processes in cells.

[Robe2017] Roberts, Eric. “Assessing and Responding to the Growth of Computer Science Undergraduate Enrollments: Annotated Findings.” http://cs.stanford.edu/people/eroberts/ResourcesForTheCSCapacityCrisis/files/AnnotatedFindings.pptx, 2017. Summarizes findings from a National Academies study about computer science enrollments.

[Robi2005] Robinson, Evan. “Why Crunch Mode Doesn’t Work: 6 Lessons.” http://www.igda.org/articles/erobinson_crunch.php; International Game Developers Association (IGDA), 2005. Summarizes research on the effects of overwork and sleep deprivation.

[Roge2018] Rogelberg, Steven G. The Surprising Science of Meetings. Oxford University Press, 2018. A short summary of research on effective meetings.

[Rohr2015] Rohrer, Doug, Robert F. Dedrick, and Sandra Stershic. “Interleaved Practice Improves Mathematics Learning.” Journal of Educational Psychology 107, no. 3 (2015): 900–908. doi:10.1037/edu0000001. Reports that interleaved practice is more effective than monotonous practice when learning.

[Rubi2013] Rubin, Marc J. “The Effectiveness of Live-Coding to Teach Introductory Programming.” In 2013 Technical Symposium on Computer Science Education (SIGCSE’13), 651–56. Association for Computing Machinery (ACM), 2013. doi:10.1145/2445196.2445388. Reports that live coding is as good as or better than using static code examples.

[Rubi2014] Rubio-Sánchez, Manuel, Päivi Kinnunen, Cristóbal Pareja-Flores, and J. Ángel Velázquez-Iturbide. “Student Perception and Usage of an Automated Programming Assessment Tool.” Computers in Human Behavior 31 (February 2014): 453–60. doi:10.1016/j.chb.2013.04.001. Describes use of an auto-grader for student assignments.

[Sahl2015] Sahlberg, Pasi. Finnish Lessons 2.0: What Can the World Learn from Educational Change in Finland? Teachers College Press, 2015. A frank look at the success of Finland’s educational system and why other countries struggle to replicate it.

[Saja2006] Sajaniemi, Jorma, Mordechai Ben-Ari, Pauli Byckling, Petri Gerdt, and Yevgeniya Kulikova. “Roles of Variables in Three Programming Paradigms.” Computer Science Education 16, no. 4 (December 2006): 261–79. doi:10.1080/08993400600874584. A detailed look at the authors’ work on roles of variables.

[Sala2017] Sala, Giovanni, and Fernand Gobet. “Does Far Transfer Exist? Negative Evidence from Chess, Music, and Working Memory Training.” Current Directions in Psychological Science 26, no. 6 (October 2017): 515–20. doi:10.1177/0963721417712760. A meta-analysis showing that far transfer rarely occurs.

[Sand2013] Sanders, Kate, Jaime Spacco, Marzieh Ahmadzadeh, Tony Clear, Stephen H. Edwards, Mikey Goldweber, Chris Johnson, Raymond Lister, Robert McCartney, and Elizabeth Patitsas. “The Canterbury QuestionBank: Building a Repository of Multiple-Choice CS1 and CS2 Questions.” In 2013 Conference on Innovation and Technology in Computer Science Education (ITiCSE’13). Association for Computing Machinery (ACM), 2013. doi:10.1145/2543882.2543885. Describes development of a shared question bank for introductory CS, and patterns for multiple choice questions that emerged from entries.

[Scan1989] Scanlan, David A. “Structured Flowcharts Outperform Pseudocode: An Experimental Comparison.” IEEE Software 6, no. 5 (September 1989): 28–36. doi:10.1109/52.35587. Reports that students understand flowcharts better than pseudocode if both are equally well structured.

[Scho1984] Schön, Donald A. The Reflective Practitioner: How Professionals Think in Action. Basic Books, 1984. A groundbreaking look at how professionals in different fields actually solve problems.

[Schw2013] Schwarz, Viviane. Welcome to Your Awesome Robot. Flying Eye Books, 2013. A wonderful illustrated guide to building wearable cardboard robot suits. Not just for kids.

[Scot1998] Scott, James C. Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed. Yale University Press, 1998. Argues that large organizations consistently prefer uniformity over productivity.

[Sent2018] Sentance, Sue, Erik Barendsen, and Carsten Schulte, eds. Computer Science Education: Perspectives on Teaching and Learning in School. Bloomsbury Press, 2018. A collection of academic survey articles on teaching computing.

[Sent2019] Sentance, Sue, Jane Waite, and Maria Kallia. “Teachers’ Experiences of Using PRIMM to Teach Programming in School.” In 2019 Technical Symposium on Computer Science Education (SIGCSE’19). ACM Press, 2019. doi:10.1145/3287324.3287477. Describes PRIMM and its effectiveness.

[Sepp2015] Seppälä, Otto, Petri Ihantola, Essi Isohanni, Juha Sorva, and Arto Vihavainen. “Do We Know How Difficult the Rainfall Problem Is?” In 2015 Koli Calling Conference on Computing Education Research (Koli’15). ACM Press, 2015. doi:10.1145/2828959.2828963. A meta-study of the Rainfall Problem.

[Shap2007] Shapiro, Jenessa R., and Steven L. Neuberg. “From Stereotype Threat to Stereotype Threats: Implications of a Multi-Threat Framework for Causes, Moderators, Mediators, Consequences, and Interventions.” Personality and Social Psychology Review 11, no. 2 (May 2007): 107–30. doi:10.1177/1088868306294790. Explores the ways the term “stereotype threat” has been used.

[Shel2017] Shell, Duane F., Leen-Kiat Soh, Abraham E. Flanigan, Markeya S. Peteranetz, and Elizabeth Ingraham. “Improving Students’ Learning and Achievement in CS Classrooms Through Computational Creativity Exercises That Integrate Computational and Creative Thinking.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017718. Reports that having students work in small groups on computational creativity exercises improves learning outcomes.

[Shol2019] Sholler, Dan, Igor Steinmacher, Denae Ford, Mara Averick, Mike Hoye, and Greg Wilson. “Ten Simple Rules for Helping Newcomers Become Contributors to Open Source Projects.” https://github.com/gvwilson/10-newcomers/, 2019. Evidence-based practices for helping newcomers become productive in open projects.

[Simo2013] Simon. “Soloway’s Rainfall Problem Has Become Harder.” In 2013 Conference on Learning and Teaching in Computing and Engineering. Institute of Electrical and Electronics Engineers (IEEE), 2013. doi:10.1109/latice.2013.44. Argues that the Rainfall Problem is harder for novices than it used to be because they’re not used to handling keyboard input, so direct comparison with past results may be unfair.

[Sing2012] Singh, Vandana. “Newcomer Integration and Learning in Technical Support Communities for Open Source Software.” In 2012 ACM International Conference on Supporting Group Work - GROUP’12. ACM Press, 2012. doi:10.1145/2389176.2389186. An early study of onboarding in open source.

[Sirk2012] Sirkiä, Teemu, and Juha Sorva. “Exploring Programming Misconceptions: An Analysis of Student Mistakes in Visual Program Simulation Exercises.” In 2012 Koli Calling Conference on Computing Education Research (Koli’12). Association for Computing Machinery (ACM), 2012. doi:10.1145/2401796.2401799. Analyzes data from student use of an execution visualization tool and classifies common mistakes.

[Sisk2018] Sisk, Victoria F., Alexander P. Burgoyne, Jingze Sun, Jennifer L. Butler, and Brooke N. Macnamara. “To What Extent and Under Which Circumstances Are Growth Mind-Sets Important to Academic Achievement? Two Meta-Analyses.” Psychological Science, March 2018, 095679761773970. doi:10.1177/0956797617739704. Reports meta-analyses of the relationship between mind-set and academic achievement, and the effectiveness of mind-set interventions on academic achievement, and finds that overall effects are weak for both, but some results support specific tenets of the theory.

[Skud2014] Skudder, Ben, and Andrew Luxton-Reilly. “Worked Examples in Computer Science.” In 2014 Australasian Computing Education Conference, (ACE’14), 2014. A summary of research on worked examples as applied to computing education.

[Smar2018] Smarr, Benjamin L., and Aaron E. Schirmer. “3.4 Million Real-World Learning Management System Logins Reveal the Majority of Students Experience Social Jet Lag Correlated with Decreased Performance.” Scientific Reports 8, no. 1 (March 2018). doi:10.1038/s41598-018-23044-8. Reports that students who have to work outside their natural body clock cycle do less well.

[Smit2009] Smith, Michelle K., William B. Wood, Wendy K. Adams, Carl E. Wieman, Jennifer K. Knight, N. Guild, and T. T. Su. “Why Peer Discussion Improves Student Performance on in-Class Concept Questions.” Science 323, no. 5910 (January 2009): 122–24. doi:10.1126/science.1165919. Reports that student understanding increases during discussion in peer instruction, even when none of the students in the group initially know the right answer.

[Solo1986] Soloway, Elliot. “Learning to Program = Learning to Construct Mechanisms and Explanations.” Communications of the ACM 29, no. 9 (September 1986): 850–58. doi:10.1145/6592.6594. Analyzes programming in terms of choosing appropriate goals and constructing plans to achieve them, and introduces the Rainfall Problem.

[Solo1984] Soloway, Elliot, and Kate Ehrlich. “Empirical Studies of Programming Knowledge.” IEEE Transactions on Software Engineering SE-10, no. 5 (September 1984): 595–609. doi:10.1109/tse.1984.5010283. Proposes that experts have programming plans and rules of programming discourse.

[Sorv2018] Sorva, Juha. “Misconceptions and the Beginner Programmer.” In Computer Science Education: Perspectives on Teaching and Learning in School, edited by Sue Sentance, Erik Barendsen, and Carsten Schulte. Bloomsbury Press, 2018. Summarizes what we know about what novices misunderstand about computing.

[Sorv2013] ———. “Notional Machines and Introductory Programming Education.” ACM Transactions on Computing Education 13, no. 2 (June 2013): 1–31. doi:10.1145/2483710.2483713. Reviews literature on programming misconceptions, and argues that instructors should address notional machines as an explicit learning objective.

[Sorv2014] Sorva, Juha, and Otto Seppälä. “Research-Based Design of the First Weeks of CS1.” In 2014 Koli Calling Conference on Computing Education Research (Koli’14). Association for Computing Machinery (ACM), 2014. doi:10.1145/2674683.2674690. Proposes three cognitively plausible frameworks for the design of a first CS course.

[Spal2014] Spalding, Dan. How to Teach Adults: Plan Your Class, Teach Your Students, Change the World. Jossey-Bass, 2014. A short guide to teaching adult free-range learners informed by the author’s social activism.

[Spoh1985] Spohrer, James C., Elliot Soloway, and Edgar Pope. “A Goal/Plan Analysis of Buggy Pascal Programs.” Human-Computer Interaction 1, no. 2 (June 1985): 163–207. doi:10.1207/s15327051hci0102_4. One of the first cognitively plausible analyses of how people program, which proposes a goal/plan model.

[Srid2016] Sridhara, Sumukh, Brian Hou, Jeffrey Lu, and John DeNero. “Fuzz Testing Projects in Massive Courses.” In 2016 Conference on Learning @ Scale (L@S’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2876034.2876050. Reports that fuzz testing student code catches errors that are missed by handwritten test suites, and explains how to safely share tests and results.

[Stam2013] Stampfer, Eliane, and Kenneth R. Koedinger. “When Seeing Isn’t Believing: Influences of Prior Conceptions and Misconceptions.” In 2013 Annual Meeting of the Cognitive Science Society (CogSci’13), 2013. Explores why giving children more information when they are learning about fractions can lower their performance.

[Stam2014] Stampfer Wiese, Eliane, and Kenneth R. Koedinger. “Investigating Scaffolds for Sense Making in Fraction Addition and Comparison.” In 2014 Annual Conference of the Cognitive Science Society (CogSci’14), 2014. Looks at how to scaffold learning of fraction operations.

[Star2014] Stark, Philip, and Richard Freishtat. “An Evaluation of Course Evaluations.” ScienceOpen Research, September 2014. doi:10.14293/s2199-1006.1.sor-edu.aofrqa.v1. Yet another demonstration that teaching evaluations don’t correlate with learning outcomes, and that they are frequently statistically suspect.

[Stas1998] Stasko, John, John Domingue, Mark H. Brown, and Blaine A. Price, eds. Software Visualization: Programming as a Multimedia Experience. MIT Press, 1998. A survey of program and algorithm visualization techniques and results.

[Stee2011] Steele, Claude M. Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do. W. W. Norton & Company, 2011. Explains and explores stereotype threat and strategies for addressing it.

[Stef2017] Stefik, Andreas, Patrick Daleiden, Diana Franklin, Stefan Hanenberg, Antti-Juhani Kaijanaho, Walter Tichy, and Brett A. Becker. “Programming Languages and Learning.” https://quorumlanguage.com/evidence.html, 2017. Summarizes what we actually know about designing programming languages and why we believe it’s true.

[Stef2013] Stefik, Andreas, and Susanna Siebert. “An Empirical Investigation into Programming Language Syntax.” ACM Transactions on Computing Education 13, no. 4 (November 2013): 1–40. doi:10.1145/2534973. Reports that curly-brace languages are as hard to learn as a language with randomly-designed syntax, but others are easier.

[Steg2016a] Stegeman, Martijn, Erik Barendsen, and Sjaak Smetsers. “Designing a Rubric for Feedback on Code Quality in Programming Courses.” In 2016 Koli Calling Conference on Computing Education Research (Koli’16). Association for Computing Machinery (ACM), 2016. doi:10.1145/2999541.2999555. Describes several iterations of a code quality rubric for novice programming courses.

[Steg2016b] ———. “Rubric for Feedback on Code Quality in Programming Courses.” http://stgm.nl/quality, 2016. Presents a code quality rubric for novice programming.

[Steg2014] ———. “Towards an Empirically Validated Model for Assessment of Code Quality.” In 2014 Koli Calling Conference on Computing Education Research (Koli’14). Association for Computing Machinery (ACM), 2014. doi:10.1145/2674683.2674702. Presents a code quality rubric for novice programming courses.

[Stei2016] Steinmacher, Igor, Tayana Uchoa Conte, Christoph Treude, and Marco Aurélio Gerosa. “Overcoming Open Source Project Entry Barriers with a Portal for Newcomers.” In 2016 International Conference on Software Engineering (ICSE’16). ACM Press, 2016. doi:10.1145/2884781.2884806. Reports the effectiveness of a portal specifically designed to help newcomers.

[Stei2018] Steinmacher, Igor, Gustavo Pinto, Igor Scaliante Wiese, and Marco Aurélio Gerosa. “Almost There: A Study on Quasi-Contributors in Open-Source Software Projects.” In 2018 International Conference on Software Engineering (ICSE’18). ACM Press, 2018. doi:10.1145/3180155.3180208. Looks at why external developers fail to get their contributions accepted into open source projects.

[Stei2013] Steinmacher, Igor, Igor Wiese, Ana Paula Chaves, and Marco Aurelio Gérosa. “Why Do Newcomers Abandon Open Source Software Projects?” In 2013 International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE’13). Institute of Electrical and Electronics Engineers (IEEE), 2013. doi:10.1109/chase.2013.6614728. Explores why new members don’t stay in open source projects.

[Stoc2018] Stockard, Jean, Timothy W. Wood, Cristy Coughlin, and Caitlin Rasplica Khoury. “The Effectiveness of Direct Instruction Curricula: A Meta-Analysis of a Half Century of Research.” Review of Educational Research, January 2018, 003465431775191. doi:10.3102/0034654317751919. A meta-analysis that finds significant positive benefit for Direct Instruction.

[Sung2012] Sung, Eunmo, and Richard E. Mayer. “When Graphics Improve Liking but Not Learning from Online Lessons.” Computers in Human Behavior 28, no. 5 (September 2012): 1618–25. doi:10.1016/j.chb.2012.03.026. Reports that students who receive any kind of graphics give significantly higher satisfaction ratings than those who don’t, but only students who get instructive graphics perform better than groups that get no graphics, seductive graphics, or decorative graphics.

[Sved2016] Svedin, Maria, and Olle Bälter. “Gender Neutrality Improved Completion Rate for All.” Computer Science Education 26, nos. 2-3 (July 2016): 192–207. doi:10.1080/08993408.2016.1231469. Reports that redesigning an online course to be gender neutral improves completion probability in general, but decreases it for students with a superficial approach to learning.

[Sond2012] Søndergaard, Harald, and Raoul A. Mulder. “Collaborative Learning Through Formative Peer Review: Pedagogy, Programs and Potential.” Computer Science Education 22, no. 4 (December 2012): 343–67. doi:10.1080/08993408.2012.728041. Surveys literature on student peer assessment, distinguishing grading and reviewing as separate forms, and summarizes features a good peer review system needs to have.

[Tedr2008] Tedre, Matti, and Erkki Sutinen. “Three Traditions of Computing: What Educators Should Know.” Computer Science Education 18, no. 3 (September 2008): 153–70. doi:10.1080/08993400802332332. Summarizes the history and views of three traditions in computing: mathematical, scientific, and engineering.

[Tew2011] Tew, Allison Elliott, and Mark Guzdial. “The FCS1: A Language Independent Assessment of CS1 Knowledge.” In 2011 Technical Symposium on Computer Science Education (SIGCSE’11). Association for Computing Machinery (ACM), 2011. doi:10.1145/1953163.1953200. Describes development and validation of a language-independent assessment instrument for CS1 knowledge.

[Thay2017] Thayer, Kyle, and Amy J. Ko. “Barriers Faced by Coding Bootcamp Students.” In 2017 International Computing Education Research Conference (ICER’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3105726.3106176. Reports that coding bootcamps are sometimes useful, but quality is varied, and formal and informal barriers to employment remain.

[Ubel2017] Ubell, Robert. “How the Pioneers of the MOOC Got It Wrong.” http://spectrum.ieee.org/tech-talk/at-work/education/how-the-pioneers-of-the-mooc-got-it-wrong, 2017. A brief exploration of why MOOCs haven’t lived up to initial hype.

[Urba2014] Urbach, David R., Anand Govindarajan, Refik Saskin, Andrew S. Wilton, and Nancy N. Baxter. “Introduction of Surgical Safety Checklists in Ontario, Canada.” New England Journal of Medicine 370, no. 11 (March 2014): 1029–38. doi:10.1056/nejmsa1308261. Reports a study showing that the introduction of surgical checklists did not have a significant effect on operative outcomes.

[Utti2013] Utting, Ian, Juha Sorva, Tadeusz Wilusz, Allison Elliott Tew, Michael McCracken, Lynda Thomas, Dennis Bouvier, et al. “A Fresh Look at Novice Programmers’ Performance and Their Teachers’ Expectations.” In 2013 Conference on Innovation and Technology in Computer Science Education (ITiCSE’13). ACM Press, 2013. doi:10.1145/2543882.2543884. Replicates an earlier study showing how little students learn in their first programming course.

[Uttl2017] Uttl, Bob, Carmela A. White, and Daniela Wong Gonzalez. “Meta-Analysis of Faculty’s Teaching Effectiveness: Student Evaluation of Teaching Ratings and Student Learning Are Not Related.” Studies in Educational Evaluation 54 (September 2017): 22–42. doi:10.1016/j.stueduc.2016.08.007. Summarizes studies showing that how students rate a course and how much they actually learn are not related.

[Varm2015] Varma, Roli, and Deepak Kapur. “Decoding Femininity in Computer Science in India.” Communications of the ACM 58, no. 5 (April 2015): 56–62. doi:10.1145/2663339. Reports female participation in computing in India.

[Vell2017] Vellukunnel, Mickey, Philip Buffum, Kristy Elizabeth Boyer, Jeffrey Forbes, Sarah Heckman, and Ketan Mayer-Patel. “Deconstructing the Discussion Forum: Student Questions and Computer Science Learning.” In 2017 Technical Symposium on Computer Science Education (SIGCSE’17). Association for Computing Machinery (ACM), 2017. doi:10.1145/3017680.3017745. Found that students mostly ask constructivist and logistical questions in forums, and that the former correlate with grades.

[Viha2014] Vihavainen, Arto, Jonne Airaksinen, and Christopher Watson. “A Systematic Review of Approaches for Teaching Introductory Programming and Their Influence on Success.” In 2014 International Computing Education Research Conference (ICER’14). Association for Computing Machinery (ACM), 2014. doi:10.1145/2632320.2632349. Consolidates studies of CS1-level teaching changes and finds media computation the most effective, while introducing a game theme is the least effective.

[Wall2009] Walle, Thorbjorn, and Jo Erskine Hannay. “Personality and the Nature of Collaboration in Pair Programming.” In 2009 International Symposium on Empirical Software Engineering and Measurement (ESEM’09). Institute of Electrical and Electronics Engineers (IEEE), 2009. doi:10.1109/esem.2009.5315996. Reports that pairs with different levels of a given personality trait communicated more intensively.

[Wang2018] Wang, April Y., Ryan Mitts, Philip J. Guo, and Parmit K. Chilana. “Mismatch of Expectations: How Modern Learning Resources Fail Conversational Programmers.” In 2018 Conference on Human Factors in Computing Systems (CHI’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3173574.3174085. Reports that learning resources don’t really help conversational programmers (those who learn coding to take part in technical discussions).

[Ward2015] Ward, James. Adventures in Stationery: A Journey Through Your Pencil Case. Profile Books, 2015. A wonderful look at the everyday items that would be in your desk drawer if someone hadn’t walked off with them.

[Wats2014] Watson, Christopher, and Frederick W. B. Li. “Failure Rates in Introductory Programming Revisited.” In 2014 Conference on Innovation and Technology in Computer Science Education (ITiCSE’14). Association for Computing Machinery (ACM), 2014. doi:10.1145/2591708.2591749. A larger version of an earlier study that found an average of one third of students fail CS1.

[Watt2014] Watters, Audrey. The Monsters of Education Technology. CreateSpace, 2014. A collection of essays about the history of educational technology and the exaggerated claims repeatedly made for it.

[Wein2018a] Weinstein, Yana, Christopher R. Madan, and Megan A. Sumeracki. “Teaching the Science of Learning.” Cognitive Research: Principles and Implications 3, no. 1 (January 2018). doi:10.1186/s41235-017-0087-y. A tutorial review of six evidence-based learning practices.

[Wein2018b] Weinstein, Yana, Megan Sumeracki, and Oliver Caviglioli. Understanding How We Learn: A Visual Guide. Routledge, 2018. A short graphical summary of effective learning strategies.

[Wein2017] Weintrop, David, and Uri Wilensky. “Comparing Block-Based and Text-Based Programming in High School Computer Science Classrooms.” ACM Transactions on Computing Education 18, no. 1 (October 2017): 1–25. doi:10.1145/3089799. Reports that students learn faster and better with blocks than with text.

[Weng2015] Wenger-Trayner, Etienne, and Beverly Wenger-Trayner. “Communities of Practice: A Brief Introduction.” http://wenger-trayner.com/intro-to-cops/, 2015. A brief summary of what communities of practice are and aren’t.

[Wibu2016] Wiburg, Karin, Julia Parra, Gaspard Mucundanyi, Jennifer Green, and Nate Shaver, eds. The Little Book of Learning Theories. Second. CreateSpace, 2016. Presents brief summaries of various theories of learning.

[Wigg2005] Wiggins, Grant, and Jay McTighe. Understanding by Design. Association for Supervision & Curriculum Development (ASCD), 2005. A lengthy presentation of reverse instructional design.

[Wilc2018] Wilcox, Chris, and Albert Lionelle. “Quantifying the Benefits of Prior Programming Experience in an Introductory Computer Science Course.” In 2018 Technical Symposium on Computer Science Education (SIGCSE’18). Association for Computing Machinery (ACM), 2018. doi:10.1145/3159450.3159480. Reports that students with prior experience outscore students without in CS1, but there is no significant difference in performance by the end of CS2; also finds that female students with prior exposure outperform their male peers in all areas, but are consistently less confident in their abilities.

[Wile2002] Wiley, David. “The Reusability Paradox.” http://opencontent.org/docs/paradox.html, 2002. Summarizes the tension between learning objects being effective and reusable.

[Wilk2011] Wilkinson, Richard, and Kate Pickett. The Spirit Level: Why Greater Equality Makes Societies Stronger. Bloomsbury Press, 2011. Presents evidence that inequality harms everyone, both economically and otherwise.

[Will2010] Willingham, Daniel T. Why Don’t Students Like School?: A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for the Classroom. Jossey-Bass, 2010. A cognitive scientist looks at how the mind works in the classroom.

[Wils2016] Wilson, Greg. “Software Carpentry: Lessons Learned.” F1000Research, January 2016. doi:10.12688/f1000research.3-62.v2. A history and analysis of Software Carpentry.

[Wils2007] Wilson, Karen, and James H. Korn. “Attention During Lectures: Beyond Ten Minutes.” Teaching of Psychology 34, no. 2 (June 2007): 85–89. doi:10.1080/00986280701291291. Reports little support for the claim that students only have a 10–15 minute attention span (though there is lots of individual variation).

[Wlod2017] Wlodkowski, Raymond J., and Margery B. Ginsberg. Enhancing Adult Motivation to Learn: A Comprehensive Guide for Teaching All Adults. Jossey-Bass, 2017. The standard reference for understanding adult motivation.

[Xie2019] Xie, Benjamin, Dastyni Loksa, Greg L. Nelson, Matthew J. Davidson, Dongsheng Dong, Harrison Kwik, Alex Hui Tan, Leanne Hwa, Min Li, and Amy J. Ko. “A Theory of Instruction for Introductory Programming Skills.” Computer Science Education 29, nos. 2-3 (January 2019): 205–53. doi:10.1080/08993408.2019.1565235. Lays out a four-part theory for teaching novices based on reading vs. writing and code vs. templates.

[Yada2016] Yadav, Aman, Sarah Gretter, Susanne Hambrusch, and Phil Sands. “Expanding Computer Science Education in Schools: Understanding Teacher Experiences and Challenges.” Computer Science Education 26, no. 4 (December 2016): 235–54. doi:10.1080/08993408.2016.1257418. Summarizes feedback from K-12 teachers on what they need by way of preparation and support.

[Yang2015] Yang, Yu-Fen, and Yuan-Yu Lin. “Online Collaborative Note-Taking Strategies to Foster EFL Beginners’ Literacy Development.” System 52 (August 2015): 127–38. doi:10.1016/j.system.2015.05.006. Reports that students using collaborative note taking when learning English as a foreign language do better than those who don’t.


  1. Neil C. C. Brown and Greg Wilson, “Ten Quick Tips for Teaching Programming,” PLoS Computational Biology 14, no. 4 (April 2018), doi:10.1371/journal.pcbi.1006023.

  2. James M. Lang, Small Teaching: Everyday Lessons from the Science of Learning (Jossey-Bass, 2016).

  3. Therese Huston, Teaching What You Don’t Know (Harvard University Press, 2012).

  4. Joseph Bergin et al., Pedagogical Patterns: Advice for Educators (CreateSpace, 2012); Doug Lemov, Teach Like a Champion 2.0: 62 Techniques That Put Students on the Path to College (Jossey-Bass, 2014); Claire Howell Major, Michael S. Harris, and Tod Zakrajsek, Teaching for Learning: 101 Intentionally Designed Educational Activities to Put Students on the Path to Success (Routledge, 2015); Stephen D. Brookfield and Stephen Preskill, The Discussion Book: 50 Great Ways to Get People Talking (Jossey-Bass, 2016); Gail Taylor Rice, Hitting Pause: 65 Lecture Breaks to Refresh and Reinforce Learning (Stylus Publishing, 2018); Yana Weinstein, Megan Sumeracki, and Oliver Caviglioli, Understanding How We Learn: A Visual Guide (Routledge, 2018).

  5. Pedro De Bruyckere, Paul A. Kirschner, and Casper D. Hulshof, Urban Myths About Learning and Education (Academic Press, 2015).

  6. David Didau and Nick Rose, What Every Teacher Needs to Know About Psychology (John Catt Educational, 2016).

  7. Seymour A. Papert, Mindstorms: Children, Computers, and Powerful Ideas, Second (Basic Books, 1993).

  8. Matthew B. Crawford, Shop Class as Soulcraft: An Inquiry into the Value of Work (Penguin, 2010).

  9. Elizabeth Green, Building a Better Teacher: How Teaching Works (and How to Teach It to Everyone) (W. W. Norton & Company, 2014); Tressie McMillan Cottom, Lower Ed: The Troubling Rise of For-Profit Colleges in the New Economy (The New Press, 2017); Audrey Watters, The Monsters of Education Technology (CreateSpace, 2014).

  10. Michael Jacoby Brown, Building Powerful Community Organizations: A Personal Guide to Creating Groups That Can Solve Problems and Change the World (Long Haul Press, 2007).

  11. Mary Lynn Manns and Linda Rising, Fearless Change: Patterns for Introducing New Ideas (Addison-Wesley, 2015).

  12. Mark Guzdial, Learner-Centered Design of Computing Education: Research on Computing for Everyone (Morgan & Claypool Publishers, 2015); Orit Hazzan, Tami Lapidot, and Noa Ragonis, Guide to Teaching Computer Science: An Activity-Based Approach, Second (Springer, 2014); Sue Sentance, Erik Barendsen, and Carsten Schulte, eds., Computer Science Education: Perspectives on Teaching and Learning in School (Bloomsbury Press, 2018); Sally Fincher and Anthony Robins, eds., The Cambridge Handbook of Computing Education Research (Cambridge University Press, 2019); National Academies of Sciences, Engineering, and Medicine, How People Learn II: Learners, Contexts, and Cultures (National Academies Press, 2018).

  13. Patricia Benner, From Novice to Expert: Excellence and Power in Clinical Nursing Practice (Pearson, 2000).

  14. Derek A. Muller et al., “Saying the Wrong Thing: Improving Learning with Multimedia by Including Misconceptions,” Journal of Computer Assisted Learning 24, no. 2 (July 2007): 144–55, doi:10.1111/j.1365-2729.2007.00248.x.

  15. Slava Kalyuga et al., “The Expertise Reversal Effect,” Educational Psychologist 38, no. 1 (March 2003): 23–31, doi:10.1207/s15326985ep3801_4.

  16. Brian W. Kernighan and P. J. Plauger, The Elements of Programming Style, Second (McGraw-Hill, 1978); Brian W. Kernighan and Rob Pike, The Unix Programming Environment (Prentice-Hall, 1983); Brian W. Kernighan and Dennis M. Ritchie, The C Programming Language, Second (Prentice-Hall, 1988).

  17. Chris Fehily, SQL: Visual Quickstart Guide, Third (Peachpit Press, 2008).

  18. Eric J. Ray and Deborah S. Ray, Unix and Linux: Visual Quickstart Guide, Fifth (Peachpit Press, 2014).

  19. Mark Guzdial, “Top 10 Myths About Teaching Computer Science” (https://cacm.acm.org/blogs/blog-cacm/189498-top-10-myths-about-teaching-computer-science/fulltext, 2015); Elizabeth Patitsas et al., “Evidence That Computer Science Grades Are Not Bimodal,” in 2016 International Computing Education Research Conference (ICER’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2960310.2960312.

  20. Bobby Ojose, Common Misconceptions in Mathematics: Strategies to Correct Them (UPA, 2015).

  21. Karen Wilson and James H. Korn, “Attention During Lectures: Beyond Ten Minutes,” Teaching of Psychology 34, no. 2 (June 2007): 85–89, doi:10.1080/00986280701291291.

  22. David Hestenes, Malcolm Wells, and Gregg Swackhamer, “Force Concept Inventory,” The Physics Teacher 30, no. 3 (March 1992): 141–58, doi:10.1119/1.2343497.

  23. Richard R. Hake, “Interactive Engagement Versus Traditional Methods: A Six-Thousand-Student Survey of Mechanics Test Data for Introductory Physics Courses,” American Journal of Physics 66, no. 1 (January 1998): 64–74, doi:10.1119/1.18809.

  24. Allison Elliott Tew and Mark Guzdial, “The FCS1: A Language Independent Assessment of CS1 Knowledge,” in 2011 Technical Symposium on Computer Science Education (SIGCSE’11) (Association for Computing Machinery (ACM), 2011), doi:10.1145/1953163.1953200.

  25. Miranda C. Parker, Mark Guzdial, and Shelly Engleman, “Replication, Validation, and Use of a Language Independent CS1 Knowledge Assessment,” in 2016 International Computing Education Research Conference (ICER’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2960310.2960316.

  26. Sally Hamouda et al., “A Basic Recursion Concept Inventory,” Computer Science Education 27, no. 2 (April 2017): 121–48, doi:10.1080/08993408.2017.1414728.

  27. Benedict Du Boulay, “Some Difficulties of Learning to Program,” Journal of Educational Computing Research 2, no. 1 (February 1986): 57–73, doi:10.2190/3lfx-9rrf-67t8-uvk9.

  28. Juha Sorva, “Notional Machines and Introductory Programming Education,” ACM Transactions on Computing Education 13, no. 2 (June 2013): 1–31, doi:10.1145/2483710.2483713.

  29. Lewis Carroll Epstein, Thinking Physics: Understandable Practical Reality (Insight Press, 2002).

  30. Matti Tedre and Erkki Sutinen, “Three Traditions of Computing: What Educators Should Know,” Computer Science Education 18, no. 3 (September 2008): 153–70, doi:10.1080/08993400802332332.

  31. Philip Stark and Richard Freishtat, “An Evaluation of Course Evaluations,” ScienceOpen Research, September 2014, doi:10.14293/s2199-1006.1.sor-edu.aofrqa.v1; Bob Uttl, Carmela A. White, and Daniela Wong Gonzalez, “Meta-Analysis of Faculty’s Teaching Effectiveness: Student Evaluation of Teaching Ratings and Student Learning Are Not Related,” Studies in Educational Evaluation 54 (September 2017): 22–42, doi:10.1016/j.stueduc.2016.08.007.

  32. Chris Parnin, Janet Siegmund, and Norman Peitek, “On the Nature of Programmer Expertise,” in Psychology of Programming Interest Group Workshop 2017, 2017.

  33. This is definitely not how our brains actually work, but it is a useful metaphor.

  34. Marian Petre and André van der Hoek, Software Design Decoded: 66 Ways Experts Think (MIT Press, 2016).

  35. Mitchell J. Nathan and Anthony Petrosino, “Expert Blind Spot Among Preservice Teachers,” American Educational Research Journal 40, no. 4 (January 2003): 905–28, doi:10.3102/00028312040004905.

  36. Herbert W. Marsh and John Hattie, “The Relation Between Research Productivity and Teaching Effectiveness: Complementary, Antagonistic, or Independent Constructs?” Journal of Higher Education 73, no. 5 (2002): 603–41, doi:10.1353/jhe.2002.0047.

  37. Jeroen Keppens and David Hay, “Concept Map Assessment for Teaching Computer Programming,” Computer Science Education 18, no. 1 (March 2008): 31–42, doi:10.1080/08993400701864880.

  38. Martin J. Eppler, “A Comparison Between Concept Maps, Mind Maps, Conceptual Diagrams, and Visual Metaphors as Complementary Tools for Knowledge Construction and Sharing,” Information Visualization 5, no. 3 (June 2006): 202–10, doi:10.1057/palgrave.ivs.9500131.

  39. Andrew Abela, “Chart Suggestions - a Thought Starter” (http://extremepresentation.typepad.com/files/choosing-a-good-chart-09.pdf, 2009).

  40. To paraphrase Oscar Wilde’s Lady Windermere’s Fan, people often don’t know what they think until they hear themselves say it.

  41. George A. Miller, “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information,” Psychological Review 63, no. 2 (1956): 81–97, doi:10.1037/h0043158.

  42. Michelle D. Miller, Minds Online: Teaching Effectively with Technology (Harvard University Press, 2016).

  43. Didau and Rose, What Every Teacher Needs to Know About Psychology.

  44. Bergin et al., Pedagogical Patterns.

  45. Marja Kuittinen and Jorma Sajaniemi, “Teaching Roles of Variables in Elementary Programming Courses,” ACM SIGCSE Bulletin 36, no. 3 (September 2004): 57, doi:10.1145/1026487.1008014; Pauli Byckling, Petri Gerdt, and Jorma Sajaniemi, “Roles of Variables in Object-Oriented Programming,” in 2005 Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA’05) (Association for Computing Machinery (ACM), 2005), doi:10.1145/1094855.1094972; Jorma Sajaniemi et al., “Roles of Variables in Three Programming Paradigms,” Computer Science Education 16, no. 4 (December 2006): 261–79, doi:10.1080/08993400600874584.

  46. Brooke N. Macnamara, David Z. Hambrick, and Frederick L. Oswald, “Deliberate Practice and Performance in Music, Games, Sports, Education, and Professions: A Meta-Analysis,” Psychological Science 25, no. 8 (July 2014): 1608–18, doi:10.1177/0956797614535810.

  47. K. Anders Ericsson, “Summing up Hours of Any Type of Practice Versus Identifying Optimal Practice Activities,” Perspectives on Psychological Science 11, no. 3 (May 2016): 351–54, doi:10.1177/1745691616635600.

  48. Mauro Cherubini et al., “Let’s Go to the Whiteboard: How and Why Software Developers Use Drawings,” in 2007 Conference on Human Factors in Computing Systems (CHI’07) (Association for Computing Machinery (ACM), 2007), doi:10.1145/1240624.1240714.

  49. My thanks to Warren Code for introducing me to this example.

  50. A more complete model would also include your senses of touch, smell, and taste, but we’ll ignore those for now.

  51. Richard E. Mayer and Roxana Moreno, “Nine Ways to Reduce Cognitive Load in Multimedia Learning,” Educational Psychologist 38, no. 1 (March 2003): 43–52, doi:10.1207/s15326985ep3801_6.

  52. Robert K. Atkinson et al., “Learning from Examples: Instructional Principles from the Worked Examples Research,” Review of Educational Research 70, no. 2 (June 2000): 181–214, doi:10.3102/00346543070002181.

  53. Eunmo Sung and Richard E. Mayer, “When Graphics Improve Liking but Not Learning from Online Lessons,” Computers in Human Behavior 28, no. 5 (September 2012): 1618–25, doi:10.1016/j.chb.2012.03.026.

  54. Eliane Stampfer and Kenneth R. Koedinger, “When Seeing Isn’t Believing: Influences of Prior Conceptions and Misconceptions,” in 2013 Annual Meeting of the Cognitive Science Society (CogSci’13), 2013; Eliane Stampfer Wiese and Kenneth R. Koedinger, “Investigating Scaffolds for Sense Making in Fraction Addition and Comparison,” in 2014 Annual Conference of the Cognitive Science Society (CogSci’14), 2014.

  55. Paul A. Kirschner, John Sweller, and Richard E. Clark, “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching,” Educational Psychologist 41, no. 2 (June 2006): 75–86, doi:10.1207/s15326985ep4102_1.

  56. Named after one of its creators.

  57. Dale Parsons and Patricia Haden, “Parson’s Programming Puzzles: A Fun and Effective Learning Tool for First Programming Courses,” in 2006 Australasian Conference on Computing Education (ACE’06) (Australian Computer Society, 2006), 157–63.

  58. Barbara J. Ericson, Lauren E. Margulieux, and Jochen Rick, “Solving Parsons Problems Versus Fixing and Writing Code,” in 2017 Koli Calling Conference on Computing Education Research (Koli’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3141880.3141895.

  59. Allan Collins, John Seely Brown, and Ann Holum, “Cognitive Apprenticeship: Making Thinking Visible,” American Educator 6 (1991): 38–46; Michael E. Caspersen and Jens Bennedsen, “Instructional Design of a Programming Course,” in 2007 International Computing Education Research Conference (ICER’07) (Association for Computing Machinery (ACM), 2007), doi:10.1145/1288580.1288595.

  60. For a long time, I believed that the variable holding the value a function was going to return had to be called result because my teacher always used that name in examples.

  61. Lauren E. Margulieux, Richard Catrambone, and Mark Guzdial, “Employing Subgoals in Computer Programming Education,” Computer Science Education 26, no. 1 (January 2016): 44–67, doi:10.1080/08993408.2016.1144429; Briana B. Morrison et al., “Subgoals Help Students Solve Parsons Problems,” in 2016 Technical Symposium on Computer Science Education (SIGCSE’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2839509.2844617.

  62. Lauren E. Margulieux, Mark Guzdial, and Richard Catrambone, “Subgoal-Labeled Instructional Material Improves Performance and Transfer in Learning to Develop Mobile Applications,” in 2012 International Computing Education Research Conference (ICER’12) (ACM Press, 2012), 71–78, doi:10.1145/2361276.2361291.

  63. John Carroll et al., “The Minimal Manual,” Human-Computer Interaction 3, no. 2 (June 1987): 123–53, doi:10.1207/s15327051hci0302_2; John Carroll, “Creating Minimalist Instruction,” International Journal of Designs for Learning 5, no. 2 (November 2014), doi:10.14434/ijdl.v5i2.12887.

  64. Ard W. Lazonder and Hans van der Meij, “The Minimal Manual: Is Less Really More?” International Journal of Man-Machine Studies 39, no. 5 (November 1993): 729–52, doi:10.1006/imms.1993.1081.

  65. Carroll, “Creating Minimalist Instruction.”

  66. Raina Mason, Carolyn Seton, and Graham Cooper, “Applying Cognitive Load Theory to the Redesign of a Conventional Database Systems Course,” Computer Science Education 26, no. 1 (January 2016): 68–87, doi:10.1080/08993408.2016.1160597.

  67. Kirschner, Sweller, and Clark, “Why Minimal Guidance During Instruction Does Not Work.”

  68. Slava Kalyuga and Anne-Marie Singh, “Rethinking the Boundaries of Cognitive Load Theory in Complex Learning,” Educational Psychology Review 28, no. 4 (December 2015): 831–52, doi:10.1007/s10648-015-9352-0.

  69. Paul A. Kirschner et al., “From Cognitive Load Theory to Collaborative Cognitive Load Theory,” International Journal of Computer-Supported Collaborative Learning, April 2018, doi:10.1007/s11412-018-9277-y.

  70. Rebecca A. Markovits and Yana Weinstein, “Can Cognitive Processes Help Explain the Success of Instructional Techniques Recommended by Behavior Analysts?” NPJ Science of Learning 3, no. 1 (January 2018), doi:10.1038/s41539-017-0018-1.

  71. Karin Wiburg et al., eds., The Little Book of Learning Theories, Second (CreateSpace, 2016).

  72. Ben Skudder and Andrew Luxton-Reilly, “Worked Examples in Computer Science,” in 2014 Australasian Computing Education Conference (ACE’14), 2014.

  73. Jean M. Griffin, “Learning by Taking Apart,” in 2016 Conference on Information Technology Education (SIGITE’16) (ACM Press, 2016), doi:10.1145/2978192.2978231.

  74. Richard E. Mayer, Multimedia Learning, Second (Cambridge University Press, 2009); Miller, Minds Online.

  75. National Academies of Sciences, Engineering, and Medicine, How People Learn II.

  76. S. Freeman et al., “Active Learning Increases Student Performance in Science, Engineering, and Mathematics,” Proc. National Academy of Sciences 111, no. 23 (May 2014): 8410–5, doi:10.1073/pnas.1319030111.

  77. Saundra Yancey McGuire, Teach Students How to Learn: Strategies You Can Incorporate into Any Course to Improve Student Metacognition, Study Skills, and Motivation (Stylus Publishing, 2015); Toshiya Miyatsu, Khuyen Nguyen, and Mark A. McDaniel, “Five Popular Study Strategies: Their Pitfalls and Optimal Implementations,” Perspectives on Psychological Science 13, no. 3 (May 2018): 390–407, doi:10.1177/1745691617710510.

  78. Giovanni Sala and Fernand Gobet, “Does Far Transfer Exist? Negative Evidence from Chess, Music, and Working Memory Training,” Current Directions in Psychological Science 26, no. 6 (October 2017): 515–20, doi:10.1177/0963721417712760.

  79. Mary L. Gick and Keith J. Holyoak, “The Cognitive Basis of Knowledge Transfer,” in Transfer of Learning: Contemporary Research and Applications, ed. S. J. Cormier and J. D. Hagman (Elsevier, 1987), 9–46, doi:10.1016/b978-0-12-188950-0.50008-4.

  80. Markovits and Weinstein, “Can Cognitive Processes Help Explain the Success of Instructional Techniques Recommended by Behavior Analysts?”

  81. Yana Weinstein, Christopher R. Madan, and Megan A. Sumeracki, “Teaching the Science of Learning,” Cognitive Research: Principles and Implications 3, no. 1 (January 2018), doi:10.1186/s41235-017-0087-y; Weinstein, Sumeracki, and Caviglioli, Understanding How We Learn.

  82. Sean H. K. Kang, “Spaced Repetition Promotes Efficient and Effective Learning,” Policy Insights from the Behavioral and Brain Sciences 3, no. 1 (January 2016): 12–19, doi:10.1177/2372732215624708.

  83. Eric Matthes, Python Flash Cards: Syntax, Concepts, and Examples (No Starch Press, 2019).

  84. Miller, Minds Online.

  85. Jeffrey D. Karpicke and Henry L. Roediger, “The Critical Importance of Retrieval for Learning,” Science 319, no. 5865 (February 2008): 966–68, doi:10.1126/science.1152408.

  86. Miller, Minds Online.

  87. Janet Metcalfe, “Learning from Errors,” Annual Review of Psychology 68, no. 1 (January 2016): 465–89, doi:10.1146/annurev-psych-010416-044022.

  88. Doug Rohrer, Robert F. Dedrick, and Sandra Stershic, “Interleaved Practice Improves Mathematics Learning,” Journal of Educational Psychology 107, no. 3 (2015): 900–908, doi:10.1037/edu0000001.

  89. Katerine Bielaczyc, Peter L. Pirolli, and Ann L. Brown, “Training in Self-Explanation and Self-Regulation Strategies: Investigating the Effects of Knowledge Acquisition Activities on Problem Solving,” Cognition and Instruction 13, no. 2 (June 1995): 221–52, doi:10.1207/s1532690xci1302_3.

  90. Michelene T. H. Chi et al., “Self-Explanations: How Students Study and Use Examples in Learning to Solve Problems,” Cognitive Science 13, no. 2 (April 1989): 145–82, doi:10.1207/s15516709cog1302_1.

  91. Katherine A. Rawson, Ruthann C. Thomas, and Larry L. Jacoby, “The Power of Examples: Illustrative Examples Enhance Conceptual Learning of Declarative Concepts,” Educational Psychology Review 27, no. 3 (June 2014): 483–504, doi:10.1007/s10648-014-9273-3.

  92. Mayer and Moreno, “Nine Ways to Reduce Cognitive Load in Multimedia Learning.”

  93. Evan Robinson, “Why Crunch Mode Doesn’t Work: 6 Lessons” (http://www.igda.org/articles/erobinson_crunch.php; International Game Developers Association (IGDA), 2005).

  94. Miller, Minds Online.

  95. Mihaly Csikszentmihalyi, Flow: The Psychology of Optimal Experience (Harper, 2008).

  96. Harald Søndergaard and Raoul A. Mulder, “Collaborative Learning Through Formative Peer Review: Pedagogy, Programs and Potential,” Computer Science Education 22, no. 4 (December 2012): 343–67, doi:10.1080/08993408.2012.728041.

  97. Deborah B. Kaufman and Richard M. Felder, “Accounting for Individual Effort in Cooperative Learning Teams,” Journal of Engineering Education 89, no. 2 (2000).

  98. Pablo Frank-Bolton and Rahul Simha, “Docendo Discimus: Students Learn by Teaching Peers Through Video,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159466.

  99. Chinmay Kulkarni et al., “Peer and Self Assessment in Massive Online Classes,” ACM Transactions on Computer-Human Interaction 20, no. 6 (December 2013): 1–31, doi:10.1145/2505057.

  100. Dwayne E. Paré and Steve Joordens, “Peering into Large Lectures: Examining Peer and Expert Mark Agreement Using peerScholar, an Online Peer Assessment Tool,” Journal of Computer Assisted Learning 24, no. 6 (October 2008): 526–40, doi:10.1111/j.1365-2729.2008.00290.x.

  101. Paul A. Kirschner and Jeroen J. G. van Merriënboer, “Do Learners Really Know Best? Urban Legends in Education,” Educational Psychologist 48, no. 3 (July 2013): 169–83, doi:10.1080/00461520.2013.804395.

  102. Guzdial, “Top 10 Myths About Teaching Computer Science.”

  103. Grant Wiggins and Jay McTighe, Understanding by Design (Association for Supervision & Curriculum Development (ASCD), 2005); John Biggs and Catherine Tang, Teaching for Quality Learning at University (Open University Press, 2011); L. Dee Fink, Creating Significant Learning Experiences: An Integrated Approach to Designing College Courses (Jossey-Bass, 2013).

  104. Jay McTighe and Grant Wiggins, “Understanding by Design Framework” (http://www.ascd.org/ASCD/pdf/siteASCD/publications/UbD_WhitePaper0312.pdf; Association for Supervision & Curriculum Development (ASCD), 2013).

  105. Green, Building a Better Teacher.

  106. James C. Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (Yale University Press, 1998).

  107. David Lorge Parnas and Paul C. Clements, “A Rational Design Process: How and Why to Fake It,” IEEE Transactions on Software Engineering SE-12, no. 2 (February 1986): 251–57, doi:10.1109/tse.1986.6312940.

  108. Lorin W. Anderson and David R. Krathwohl, eds., A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom’s Taxonomy of Educational Objectives (Longman, 2001).

  109. Susana Masapanta-Carrión and J. Ángel Velázquez-Iturbide, “A Systematic Review of the Use of Bloom’s Taxonomy in Computer Science Education,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159491.

  110. Daniel T. Willingham, Why Don’t Students Like School?: A Cognitive Scientist Answers Questions About How the Mind Works and What It Means for the Classroom (Jossey-Bass, 2010).

  111. Fink, Creating Significant Learning Experiences.

  112. David Wiley, “The Reusability Paradox” (http://opencontent.org/docs/paradox.html, 2002).

  113. Sue Sentance, Jane Waite, and Maria Kallia, “Teachers’ Experiences of Using PRIMM to Teach Programming in School,” in 2019 Technical Symposium on Computer Science Education (SIGCSE’19) (ACM Press, 2019), doi:10.1145/3287324.3287477.

  114. Mackenzie Leake and Colleen M. Lewis, “Recommendations for Designing CS Resource Sharing Sites for All Teachers,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017780.

  115. Matthew J. Koehler, Punya Mishra, and William Cain, “What Is Technological Pedagogical Content Knowledge (TPACK)?” Journal of Education 193, no. 3 (2013): 13–19, doi:10.1177/002205741319300303.

  116. Richard E. Mayer, “Teaching of Subject Matter,” Annual Review of Psychology 55, no. 1 (February 2004): 715–44, doi:10.1146/annurev.psych.55.082602.133124.

  117. Joseph Henrich, Steven J. Heine, and Ara Norenzayan, “The Weirdest People in the World?” Behavioral and Brain Sciences 33, nos. 2-3 (June 2010): 61–83, doi:10.1017/s0140525x0999152x.

  118. Andrew Luxton-Reilly et al., “Developing Assessments to Determine Mastery of Programming Fundamentals,” in 2017 Conference on Innovation and Technology in Computer Science Education (ITiCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3174781.3174784.

  119. Kathryn M. Rich et al., “K-8 Learning Trajectories Derived from Research Literature,” in 2017 International Computing Education Research Conference (ICER’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3105726.3106166.

  120. Ibid.

  121. Jens Bennedsen and Michael E. Caspersen, “Failure Rates in Introductory Programming,” ACM SIGCSE Bulletin 39, no. 2 (June 2007): 32, doi:10.1145/1272848.1272879; Christopher Watson and Frederick W. B. Li, “Failure Rates in Introductory Programming Revisited,” in 2014 Conference on Innovation and Technology in Computer Science Education (ITiCSE’14) (Association for Computing Machinery (ACM), 2014), doi:10.1145/2591708.2591749.

  122. Chris Wilcox and Albert Lionelle, “Quantifying the Benefits of Prior Programming Experience in an Introductory Computer Science Course,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159480.

  123. Michael McCracken et al., “A Multi-National, Multi-Institutional Study of Assessment of Programming Skills of First-Year CS Students,” in 2001 Conference on Innovation and Technology in Computer Science Education (ITiCSE’01) (Association for Computing Machinery (ACM), 2001), doi:10.1145/572133.572137.

  124. Ian Utting et al., “A Fresh Look at Novice Programmers’ Performance and Their Teachers’ Expectations,” in 2013 Conference on Innovation and Technology in Computer Science Education (ITiCSE’13) (ACM Press, 2013), doi:10.1145/2543882.2543884.

  125. Roy D. Pea, “Language-Independent Conceptual ‘Bugs’ in Novice Programming,” Journal of Educational Computing Research 2, no. 1 (February 1986): 25–36, doi:10.2190/689t-1r2a-x4w4-29j2.

  126. Juha Sorva, “Misconceptions and the Beginner Programmer,” in Computer Science Education: Perspectives on Teaching and Learning in School, ed. Sue Sentance, Erik Barendsen, and Carsten Schulte (Bloomsbury Press, 2018).

  127. Yizhou Qian and James Lehman, “Students’ Misconceptions and Other Difficulties in Introductory Programming,” ACM Transactions on Computing Education 18, no. 1 (October 2017): 1–24, doi:10.1145/3077618.

  128. Tobias Kohn, “Variable Evaluation: An Exploration of Novice Programmers’ Understanding and Common Misconceptions,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017724.

  129. Neil C. C. Brown and Amjad Altadmri, “Novice Java Programming Mistakes,” ACM Transactions on Computing Education 17, no. 2 (May 2017), doi:10.1145/2994154.

  130. Ibid.

  131. Thomas H. Park, Brian Dorn, and Andrea Forte, “An Analysis of HTML and CSS Syntax Errors in a Web Development Course,” ACM Transactions on Computing Education 15, no. 1 (March 2015): 1–21, doi:10.1145/2700514.

  132. Elliot Soloway and Kate Ehrlich, “Empirical Studies of Programming Knowledge,” IEEE Transactions on Software Engineering SE-10, no. 5 (September 1984): 595–609, doi:10.1109/tse.1984.5010283; Elliot Soloway, “Learning to Program = Learning to Construct Mechanisms and Explanations,” Communications of the ACM 29, no. 9 (September 1986): 850–58, doi:10.1145/6592.6594.

  133. Benjamin Xie et al., “A Theory of Instruction for Introductory Programming Skills,” Computer Science Education 29, nos. 2-3 (January 2019): 205–53, doi:10.1080/08993408.2019.1565235.

  134. Orna Muller, David Ginat, and Bruria Haberman, “Pattern-Oriented Instruction and Its Influence on Problem Decomposition and Solution Construction,” in 2007 Technical Symposium on Computer Science Education (SIGCSE’07) (Association for Computing Machinery (ACM), 2007), doi:10.1145/1268784.1268830.

  135. Margulieux, Guzdial, and Catrambone, “Subgoal-Labeled Instructional Material Improves Performance and Transfer in Learning to Develop Mobile Applications”; Margulieux, Catrambone, and Guzdial, “Employing Subgoals in Computer Programming Education.”

  136. Renée McCauley et al., “Debugging: A Review of the Literature from an Educational Perspective,” Computer Science Education 18, no. 2 (June 2008): 67–92, doi:10.1080/08993400802114581.

  137. Raymond Lister et al., “A Multi-National Study of Reading and Tracing Skills in Novice Programmers,” in 2004 Conference on Innovation and Technology in Computer Science Education (ITiCSE’04) (Association for Computing Machinery (ACM), 2004), doi:10.1145/1044550.1041673; Raymond Lister, Colin Fidge, and Donna Teague, “Further Evidence of a Relationship Between Explaining, Tracing and Writing Skills in Introductory Programming,” ACM SIGCSE Bulletin 41, no. 3 (August 2009): 161, doi:10.1145/1595496.1562930.

  138. Brian Harrington and Nick Cheng, “Tracing vs. Writing Code: Beyond the Learning Hierarchy,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159530.

  139. Sue Fitzgerald et al., “Debugging: Finding, Fixing and Flailing, a Multi-Institutional Study of Novice Debuggers,” Computer Science Education 18, no. 2 (June 2008): 93–116, doi:10.1080/08993400802114508; Laurie Murphy et al., “Debugging: The Good, the Bad, and the Quirky - a Qualitative Analysis of Novices’ Strategies,” ACM SIGCSE Bulletin 40, no. 1 (February 2008): 163, doi:10.1145/1352322.1352191.

  140. Basma S. Alqadi and Jonathan I. Maletic, “An Empirical Study of Debugging Patterns Among Novice Programmers,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017761.

  141. Victor R. Basili and Richard W. Selby, “Comparing the Effectiveness of Software Testing Strategies,” IEEE Transactions on Software Engineering SE-13, no. 12 (December 1987): 1278–96, doi:10.1109/tse.1987.232881; Chris F. Kemerer and Mark C. Paulk, “The Impact of Design and Code Reviews on Software Quality: An Empirical Study Based on PSP Data,” IEEE Transactions on Software Engineering 35, no. 4 (July 2009): 534–50, doi:10.1109/tse.2009.27; Alberto Bacchelli and Christian Bird, “Expectations, Outcomes, and Challenges of Modern Code Review,” in 2013 International Conference on Software Engineering (ICSE’13), 2013.

  142. Martijn Stegeman, Erik Barendsen, and Sjaak Smetsers, “Towards an Empirically Validated Model for Assessment of Code Quality,” in 2014 Koli Calling Conference on Computing Education Research (Koli’14) (Association for Computing Machinery (ACM), 2014), doi:10.1145/2674683.2674702; Martijn Stegeman, Erik Barendsen, and Sjaak Smetsers, “Designing a Rubric for Feedback on Code Quality in Programming Courses,” in 2016 Koli Calling Conference on Computing Education Research (Koli’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2999541.2999555.

  143. Kathryn Cunningham et al., “Using Tracing and Sketching to Solve Programming Problems,” in 2017 Conference on International Computing Education Research (ICER’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3105726.3106190.

  144. Adam Scott Carter and Christopher David Hundhausen, “Using Programming Process Data to Detect Differences in Students’ Patterns of Programming,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017785.

  145. Samuel A. Brian et al., “Planting Bugs: A System for Testing Students’ Unit Tests,” in 2015 Conference on Innovation and Technology in Computer Science Education (ITiCSE’15) (Association for Computing Machinery (ACM), 2015), doi:10.1145/2729094.2742631.

  146. Stephen H. Edwards and Zalia Shams, “Do Student Programmers All Tend to Write the Same Software Tests?” in 2014 Conference on Innovation and Technology in Computer Science Education (ITiCSE’14) (Association for Computing Machinery (ACM), 2014), doi:10.1145/2591708.2591757.

  147. David Weintrop and Uri Wilensky, “Comparing Block-Based and Text-Based Programming in High School Computer Science Classrooms,” ACM Transactions on Computing Education 18, no. 1 (October 2017): 1–25, doi:10.1145/3089799.

  148. John Maloney et al., “The Scratch Programming Language and Environment,” ACM Transactions on Computing Education 10, no. 4 (November 2010): 1–15, doi:10.1145/1868358.1868363.

  149. Chen Chen et al., “The Effects of First Programming Language on College Students’ Computing Attitude and Achievement: A Comparison of Graphical and Textual Languages,” Computer Science Education 29, no. 1 (November 2018): 23–48, doi:10.1080/08993408.2018.1547564.

  150. Efthimia Aivaloglou and Felienne Hermans, “How Kids Code and How We Know,” in 2016 International Computing Education Research Conference (ICER’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2960310.2960325.

  151. Shuchi Grover and Satabdi Basu, “Measuring Student Learning in Introductory Block-Based Programming,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017723; Monika Mladenović, Ivica Boljat, and Žana Žanko, “Comparing Loops Misconceptions in Block-Based and Text-Based Programming Languages at the K-12 Level,” Education and Information Technologies, November 2017, doi:10.1007/s10639-017-9673-3.

  152. Andreas Stefik and Susanna Siebert, “An Empirical Investigation into Programming Language Syntax,” ACM Transactions on Computing Education 13, no. 4 (November 2013): 1–40, doi:10.1145/2534973.

  153. Andreas Stefik et al., “Programming Languages and Learning” (https://quorumlanguage.com/evidence.html, 2017).

  154. Mark Guzdial, “Five Principles for Programming Languages for Learners” (https://cacm.acm.org/blogs/blog-cacm/203554-five-principles-for-programming-languages-for-learners/fulltext, 2016).

  155. Jens Bennedsen and Carsten Schulte, “What Does ‘Objects-First’ Mean?: An International Study of Teachers’ Perceptions of Objects-First,” in 2007 Koli Calling Conference on Computing Education Research (Koli’07), 2007, 21–29.

  156. Juha Sorva and Otto Seppälä, “Research-Based Design of the First Weeks of CS1,” in 2014 Koli Calling Conference on Computing Education Research (Koli’14) (Association for Computing Machinery (ACM), 2014), doi:10.1145/2674683.2674690.

  157. Michael Kölling, “Lessons from the Design of Three Educational Programming Environments,” International Journal of People-Oriented Programming 4, no. 1 (January 2015): 5–32, doi:10.4018/ijpop.2015010102.

  158. Craig S. Miller and Amber Settle, “Some Trouble with Transparency: An Analysis of Student Errors with Object-Oriented Python,” in 2016 International Computing Education Research Conference (ICER’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2960310.2960327.

  159. Noa Ragonis and Ronit Shmallo, “On the (Mis)understanding of the this Reference,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017715.

  160. Stefan Endrikat et al., “How Do API Documentation and Static Typing Affect API Usability?” in 2014 International Conference on Software Engineering (ICSE’14) (ACM Press, 2014), doi:10.1145/2568225.2568299; Lars Fischer and Stefan Hanenberg, “An Empirical Investigation of the Effects of Type Systems and Code Completion on API Usability Using TypeScript and JavaScript in MS Visual Studio,” in 11th Symposium on Dynamic Languages (DLS’15) (ACM Press, 2015), doi:10.1145/2816707.2816720.

  161. Brian W. Kernighan and Rob Pike, The Practice of Programming (Addison-Wesley, 1999).

  162. Johannes Hofmeister, Janet Siegmund, and Daniel V. Holt, “Shorter Identifier Names Take Longer to Comprehend,” in 2017 Conference on Software Analysis, Evolution and Reengineering (SANER’17) (Institute of Electrical and Electronics Engineers (IEEE), 2017), doi:10.1109/saner.2017.7884623.

  163. Gal Beniamini et al., “Meaningful Identifier Names: The Case of Single-Letter Variables,” in 2017 International Conference on Program Comprehension (ICPC’17) (Institute of Electrical and Electronics Engineers (IEEE), 2017), doi:10.1109/icpc.2017.18.

  164. Dave Binkley et al., “The Impact of Identifier Style on Effort and Comprehension,” Empirical Software Engineering 18, no. 2 (May 2012): 219–76, doi:10.1007/s10664-012-9201-4.

  165. Brett A. Becker et al., “Effective Compiler Error Message Enhancement for Novice Programming Students,” Computer Science Education 26, nos. 2-3 (July 2016): 148–75, doi:10.1080/08993408.2016.1225464.

  166. Titus Barik et al., “Do Developers Read Compiler Error Messages?” in 2017 International Conference on Software Engineering (ICSE’17) (Institute of Electrical and Electronics Engineers (IEEE), 2017), doi:10.1109/icse.2017.59.

  167. Guillaume Marceau, Kathi Fisler, and Shriram Krishnamurthi, “Measuring the Effectiveness of Error Messages Designed for Novice Programmers,” in 2011 Technical Symposium on Computer Science Education (SIGCSE’11) (Association for Computing Machinery (ACM), 2011), doi:10.1145/1953163.1953308.

  168. Philip J. Guo, “Online Python Tutor,” in 2013 Technical Symposium on Computer Science Education (SIGCSE’13) (Association for Computing Machinery (ACM), 2013), doi:10.1145/2445196.2445368.

  169. John Stasko et al., eds., Software Visualization: Programming as a Multimedia Experience (MIT Press, 1998); Ibrahim Cetin and Christine Andrews-Larson, “Learning Sorting Algorithms Through Visualization Construction,” Computer Science Education 26, no. 1 (January 2016): 27–43, doi:10.1080/08993408.2016.1160664.

  170. Cunningham et al., “Using Tracing and Sketching to Solve Programming Problems.”

  171. David A. Scanlan, “Structured Flowcharts Outperform Pseudocode: An Experimental Comparison,” IEEE Software 6, no. 5 (September 1989): 28–36, doi:10.1109/52.35587.

  172. Arto Vihavainen, Jonne Airaksinen, and Christopher Watson, “A Systematic Review of Approaches for Teaching Introductory Programming and Their Influence on Success,” in 2014 International Computing Education Research Conference (ICER’14) (Association for Computing Machinery (ACM), 2014), doi:10.1145/2632320.2632349.

  173. Leland Beck and Alexander Chizhik, “Cooperative Learning Instructional Methods for CS1: Design, Implementation, and Evaluation,” ACM Transactions on Computing Education 13, no. 3 (August 2013): 10:1–10:21, doi:10.1145/2492686.

  174. Duane F. Shell et al., “Improving Students’ Learning and Achievement in CS Classrooms Through Computational Creativity Exercises That Integrate Computational and Creative Thinking,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017718.

  175. Fincher and Robins, The Cambridge Handbook of Computing Education Research.

  176. Petri Ihantola et al., “Educational Data Mining and Learning Analytics in Programming: Literature Review and Case Studies,” in 2016 Conference on Innovation and Technology in Computer Science Education (ITiCSE’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2858796.2858798.

  177. Ojose, Common Misconceptions in Mathematics.

  178. Hazzan, Lapidot, and Ragonis, Guide to Teaching Computer Science; Guzdial, Learner-Centered Design of Computing Education; Sentance, Barendsen, and Schulte, Computer Science Education.

  179. Teemu Sirkiä and Juha Sorva, “Exploring Programming Misconceptions: An Analysis of Student Mistakes in Visual Program Simulation Exercises,” in 2012 Koli Calling Conference on Computing Education Research (Koli’12) (Association for Computing Machinery (ACM), 2012), doi:10.1145/2401796.2401799.

  180. Nick Cheng and Brian Harrington, “The Code Mangler: Evaluating Coding Ability Without Writing Any Code,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017704.

  181. Soloway, “Learning to Program = Learning to Construct Mechanisms and Explanations.”

  182. Kathi Fisler, “The Recurring Rainfall Problem,” in 2014 International Computing Education Research Conference (ICER’14) (Association for Computing Machinery (ACM), 2014), doi:10.1145/2632320.2632346; Simon, “Soloway’s Rainfall Problem Has Become Harder,” in 2013 Conference on Learning and Teaching in Computing and Engineering (Institute of Electrical and Electronics Engineers (IEEE), 2013), doi:10.1109/latice.2013.44; Otto Seppälä et al., “Do We Know How Difficult the Rainfall Problem Is?” in 2015 Koli Calling Conference on Computing Education Research (Koli’15) (ACM Press, 2015), doi:10.1145/2828959.2828963.

  183. Kuittinen and Sajaniemi, “Teaching Roles of Variables in Elementary Programming Courses”; Byckling, Gerdt, and Sajaniemi, “Roles of Variables in Object-Oriented Programming”; Sajaniemi et al., “Roles of Variables in Three Programming Paradigms.”

  184. Luxton-Reilly et al., “Developing Assessments to Determine Mastery of Programming Fundamentals.”

  185. Vihavainen, Airaksinen, and Watson, “A Systematic Review of Approaches for Teaching Introductory Programming and Their Influence on Success.”

  186. Paul Denny et al., “Research This! Questions That Computing Educators Most Want Computing Education Researchers to Answer,” in 2019 Conference on International Computing Education Research (ICER’19) (Association for Computing Machinery (ACM), 2019).

  187. Marc J. Rubin, “The Effectiveness of Live-Coding to Teach Introductory Programming,” in 2013 Technical Symposium on Computer Science Education (SIGCSE’13) (Association for Computing Machinery (ACM), 2013), 651–56, doi:10.1145/2445196.2445388; Lassi Haaranen, “Programming as a Performance - Live-Streaming and Its Implications for Computer Science Education,” in 2017 Conference on Innovation and Technology in Computer Science Education (ITiCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3059009.3059035; Adalbert Gerald Soosai Raj et al., “Role of Live-Coding in Learning Introductory Programming,” in 2018 Koli Calling International Conference on Computing Education Research (Koli’18), 2018, doi:10.1145/3279720.3279725.

  188. Petri Ihantola and Ville Karavirta, “Two-Dimensional Parson’s Puzzles: The Concept, Tools, and First Observations,” Journal of Information Technology Education: Innovations in Practice 10 (2011): 119–32, doi:10.28945/1394.

  189. Paulo Blikstein et al., “Programming Pluralism: Using Learning Analytics to Detect Patterns in the Learning of Computer Programming,” Journal of the Learning Sciences 23, no. 4 (October 2014): 561–99, doi:10.1080/10508406.2014.954750.

  190. James C. Spohrer, Elliot Soloway, and Edgar Pope, “A Goal/Plan Analysis of Buggy Pascal Programs,” Human-Computer Interaction 1, no. 2 (June 1985): 163–207, doi:10.1207/s15327051hci0102_4.

  191. Jean Stockard et al., “The Effectiveness of Direct Instruction Curricula: A Meta-Analysis of a Half Century of Research,” Review of Educational Research, January 2018, doi:10.3102/0034654317751919.

  192. Green, Building a Better Teacher.

  193. Sally Fincher and Josh Tenenberg, “Warren’s Question,” in 2007 International Computing Education Research Conference (ICER’07) (Association for Computing Machinery (ACM), 2007), doi:10.1145/1288580.1288588; Sally Fincher et al., “Stories of Change: How Educators Change Their Practice,” in 2012 Frontiers in Education Conference (FIE’12) (Institute of Electrical and Electronics Engineers (IEEE), 2012), doi:10.1109/fie.2012.6462317.

  194. Lecia Barker, Christopher Lynnly Hovey, and Jane Gruning, “What Influences CS Faculty to Adopt Teaching Practices?” in 2015 Technical Symposium on Computer Science Education (SIGCSE’15) (Association for Computing Machinery (ACM), 2015), doi:10.1145/2676723.2677282.

  195. For a while, I was so worried about playing in tune that I completely lost my sense of timing.

  196. Cara Gormally, Mara Evans, and Peggy Brickman, “Feedback About Teaching in Higher Ed: Neglected Opportunities to Promote Change,” Cell Biology Education 13, no. 2 (June 2014): 187–99, doi:10.1187/cbe.13-12-0235.

  197. Atul Gawande, “Personal Best,” The New Yorker, October 3, 2011.

  198. Donald A. Schön, The Reflective Practitioner: How Professionals Think in Action (Basic Books, 1984).

  199. Benjamin S. Bloom, “The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring,” Educational Researcher 13, no. 6 (June 1984): 4–16, doi:10.3102/0013189x013006004.

  200. Anu Partanen, “What Americans Keep Ignoring About Finland’s School Success” (https://www.theatlantic.com/national/archive/2011/12/what-americans-keep-ignoring-about-finlands-school-success/250564/, 2011).

  201. Valerie Aurora and Mary Gardiner, How to Respond to Code of Conduct Reports, Version 1.1 (Frame Shift Consulting LLC, 2019).

  202. Eric Mazur, Peer Instruction: A User’s Manual (Prentice-Hall, 1996).

  203. Catherine H. Crouch and Eric Mazur, “Peer Instruction: Ten Years of Experience and Results,” American Journal of Physics 69, no. 9 (September 2001): 970–77, doi:10.1119/1.1374249; Leo Porter et al., “Success in Introductory Programming: What Works?” Communications of the ACM 56, no. 8 (August 2013): 34, doi:10.1145/2492007.2492020.

  204. Leo Porter et al., “A Multi-Institutional Study of Peer Instruction in Introductory Computing,” in 2016 Technical Symposium on Computer Science Education (SIGCSE’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2839509.2844642.

  205. Michelle K. Smith et al., “Why Peer Discussion Improves Student Performance on in-Class Concept Questions,” Science 323, no. 5910 (January 2009): 122–24, doi:10.1126/science.1165919.

  206. Marilyn Friend and Lynne Cook, Interactions: Collaboration Skills for School Professionals, Eighth Edition (Pearson, 2016).

  207. Justin Kruger and David Dunning, “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments,” Journal of Personality and Social Psychology 77, no. 6 (1999): 1121–34, doi:10.1037/0022-3514.77.6.1121.

  208. Jane Margolis et al., Stuck in the Shallow End: Education, Race, and Computing (MIT Press, 2010).

  209. Partanen, “What Americans Keep Ignoring About Finland’s School Success.”

  210. Jo Erskine Hannay et al., “The Effectiveness of Pair Programming: A Meta-Analysis,” Information and Software Technology 51, no. 7 (July 2009): 1110–22, doi:10.1016/j.infsof.2009.02.001.

  211. Charlie McDowell et al., “Pair Programming Improves Student Retention, Confidence, and Program Quality,” Communications of the ACM 49, no. 8 (August 2006): 90–95, doi:10.1145/1145287.1145293; Brian Hanks et al., “Pair Programming in Education: A Literature Review,” Computer Science Education 21, no. 2 (June 2011): 135–73, doi:10.1080/08993408.2011.579808; Porter et al., “Success in Introductory Programming”; Mehmet Celepkolu and Kristy Elizabeth Boyer, “Thematic Analysis of Students’ Reflections on Pair Programming in CS1,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159516.

  212. Colleen M. Lewis and Niral Shah, “How Equity and Inequity Can Emerge in Pair Programming,” in 2015 International Computing Education Research Conference (ICER’15) (Association for Computing Machinery (ACM), 2015), doi:10.1145/2787622.2787716.

  213. Jo Erskine Hannay et al., “Effects of Personality on Pair Programming,” IEEE Transactions on Software Engineering 36, no. 1 (January 2010): 61–80, doi:10.1109/tse.2009.41.

  214. Thorbjorn Walle and Jo Erskine Hannay, “Personality and the Nature of Collaboration in Pair Programming,” in 2009 International Symposium on Empirical Software Engineering and Measurement (ESER’09) (Institute of Electrical and Electronics Engineers (IEEE), 2009), doi:10.1109/esem.2009.5315996.

  215. Edwin G. Aiken, Gary S. Thomas, and William A. Shennum, “Memory for a Lecture: Effects of Notes, Lecture Rate, and Informational Density,” Journal of Educational Psychology 67, no. 3 (1975): 439–44, doi:10.1037/h0076613; Mark Bohay et al., “Note Taking, Review, Memory, and Comprehension,” American Journal of Psychology 124, no. 1 (2011): 63, doi:10.5406/amerjpsyc.124.1.0063.

  216. Harold N. Orndorff III, “Collaborative Note-Taking: The Impact of Cloud Computing on Classroom Performance,” International Journal of Teaching and Learning in Higher Education 27, no. 3 (2015): 340–51; Yu-Fen Yang and Yuan-Yu Lin, “Online Collaborative Note-Taking Strategies to Foster EFL Beginners’ Literacy Development,” System 52 (August 2015): 127–38, doi:10.1016/j.system.2015.05.006.

  217. Pam A. Mueller and Daniel M. Oppenheimer, “The Pen Is Mightier Than the Keyboard,” Psychological Science 25, no. 6 (April 2014): 1159–68, doi:10.1177/0956797614524581.

  218. Kayla Morehead, John Dunlosky, and Katherine A. Rawson, “How Much Mightier Is the Pen Than the Keyboard for Note-Taking? A Replication and Extension of Mueller and Oppenheimer (2014),” Educational Psychology Review, February 2019, doi:10.1007/s10648-019-09468-2.

  219. James Ward, Adventures in Stationery: A Journey Through Your Pencil Case (Profile Books, 2015).

  220. A colleague once told me that the basic unit of teaching is the bladder. When I said I’d never thought of that, she said, “You’ve obviously never been pregnant.”

  221. Jennifer Alvidrez and Rhona S. Weinstein, “Early Teacher Perceptions and Later Student Academic Achievement,” Journal of Educational Psychology 91, no. 4 (1999): 731–46, doi:10.1037/0022-0663.91.4.731; Lee Jussim and Kent D. Harber, “Teacher Expectations and Self-Fulfilling Prophecies: Knowns and Unknowns, Resolved and Unresolved Controversies,” Personality and Social Psychology Review 9, no. 2 (May 2005): 131–55, doi:10.1207/s15327957pspr0902_3.

  222. “And Linux!” someone shouts from the back of the room.

  223. Kelly Miller et al., “Role of Physics Lecture Demonstrations in Conceptual Learning,” Physical Review Special Topics – Physics Education Research 9, no. 2 (September 2013), doi:10.1103/physrevstper.9.020113.

  224. Benjamin L. Smarr and Aaron E. Schirmer, “3.4 Million Real-World Learning Management System Logins Reveal the Majority of Students Experience Social Jet Lag Correlated with Decreased Performance,” Scientific Reports 8, no. 1 (March 2018), doi:10.1038/s41598-018-23044-8.

  225. Fink, Creating Significant Learning Experiences.

  226. Donald L. Kirkpatrick, Evaluating Training Programs: The Four Levels (Berrett-Koehler, 1994).

  227. Raymond J. Wlodkowski and Margery B. Ginsberg, Enhancing Adult Motivation to Learn: A Comprehensive Guide for Teaching All Adults (Jossey-Bass, 2017).

  228. Miller, Minds Online.

  229. James M. Lang, Cheating Lessons: Learning from Academic Dishonesty (Harvard University Press, 2013).

  230. Martin V. Covington, Linda M. von Hoene, and Dominic J. Voge, Life Beyond Grades: Designing College Courses to Promote Intrinsic Motivation (Cambridge University Press, 2017).

  231. Biggs and Tang, Teaching for Quality Learning at University.

  232. Susan A. Ambrose et al., How Learning Works: Seven Research-Based Principles for Smart Teaching (Jossey-Bass, 2010).

  233. Miller, Minds Online.

  234. Lecia Barker, Christopher Lynnly Hovey, and Leisa D. Thompson, “Results of a Large-Scale, Multi-Institutional Study of Undergraduate Retention in Computing,” in 2014 Frontiers in Education Conference (FIE’14) (Institute of Electrical and Electronics Engineers (IEEE), 2014), doi:10.1109/fie.2014.7044267.

  235. Carl Hendrick and Robin Macpherson, What Does This Look Like in the Classroom?: Bridging the Gap Between Research and Practice (John Catt Educational, 2017).

  236. Mark Guzdial, “Exploring Hypotheses About Media Computation,” in 2013 International Computing Education Research Conference (ICER’13) (Association for Computing Machinery (ACM), 2013), doi:10.1145/2493394.2493397.

  237. Cynthia Bailey Lee, “Experience Report: CS1 in MATLAB for Non-Majors, with Media Computation and Peer Instruction,” in 2013 Technical Symposium on Computer Science Education (SIGCSE’13) (Association for Computing Machinery (ACM), 2013), doi:10.1145/2445196.2445214.

  238. Sarah Dahlby Albright, Titus H. Klinge, and Samuel A. Rebelsky, “A Functional Approach to Data Science in CS1,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159550; Mark Meysenburg et al., “DIVAS: Outreach to the Natural Sciences Through Image Processing,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159537; Anna Ritz, “Programming the Central Dogma: An Integrated Unit on Computer Science and Molecular Biology Concepts,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159590.

  239. Sapna Cheryan et al., “Ambient Belonging: How Stereotypical Cues Impact Gender Participation in Computer Science,” Journal of Personality and Social Psychology 97, no. 6 (2009): 1045–60, doi:10.1037/a0016239.

  240. Danielle Gaucher, Justin Friesen, and Aaron C. Kay, “Evidence That Gendered Wording in Job Advertisements Exists and Sustains Gender Inequality,” Journal of Personality and Social Psychology 101, no. 1 (2011): 109–28, doi:10.1037/a0022530.

  241. Richard Wilkinson and Kate Pickett, The Spirit Level: Why Greater Equality Makes Societies Stronger (Bloomsbury Press, 2011).

  242. Patitsas et al., “Evidence That Computer Science Grades Are Not Bimodal.”

  243. Jere E. Brophy, “Research on the Self-Fulfilling Prophecy and Teacher Expectations,” Journal of Educational Psychology 75, no. 5 (1983): 631–61, doi:10.1037/0022-0663.75.5.631.

  244. Patitsas et al., “Evidence That Computer Science Grades Are Not Bimodal.”

  245. Denae Ford et al., “Paradise Unplugged: Identifying Barriers for Female Participation on Stack Overflow,” in 2016 International Symposium on Foundations of Software Engineering (FSE’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2950290.2950331.

  246. Maria Svedin and Olle Bälter, “Gender Neutrality Improved Completion Rate for All,” Computer Science Education 26, nos. 2-3 (July 2016): 192–207, doi:10.1080/08993408.2016.1231469.

  247. Manu Kapur, “Examining Productive Failure, Productive Success, Unproductive Failure, and Unproductive Success in Learning,” Educational Psychologist 51, no. 2 (April 2016): 289–99, doi:10.1080/00461520.2016.1155457.

  248. Wilcox and Lionelle, “Quantifying the Benefits of Prior Programming Experience in an Introductory Computer Science Course.”

  249. Victoria F. Sisk et al., “To What Extent and Under Which Circumstances Are Growth Mind-Sets Important to Academic Achievement? Two Meta-Analyses,” Psychological Science, March 2018, doi:10.1177/0956797617739704.

  250. Claude M. Steele, Whistling Vivaldi: How Stereotypes Affect Us and What We Can Do (W. W. Norton & Company, 2011).

  251. Jenessa R. Shapiro and Steven L. Neuberg, “From Stereotype Threat to Stereotype Threats: Implications of a Multi-Threat Framework for Causes, Moderators, Mediators, Consequences, and Interventions,” Personality and Social Psychology Review 11, no. 2 (May 2007): 107–30, doi:10.1177/1088868306294790.

  252. Norman Coombs, Making Online Teaching Accessible (Jossey-Bass, 2012); Sheryl E. Burgstahler, Universal Design in Higher Education: From Principles to Practice, Second Edition (Harvard Education Press, 2015).

  253. Cynthia Bailey Lee, “What Can I Do Today to Create a More Inclusive Community in CS?” (http://bit.ly/2oynmSH, 2017).

  254. Betsy DiSalvo et al., “Saving Face While Geeking Out: Video Game Testing as a Justification for Learning Computer Science,” Journal of the Learning Sciences 23, no. 3 (July 2014): 272–315, doi:10.1080/10508406.2014.893434.

  255. Michael Lachney, “Computational Communities: African-American Cultural Capital in Computer Science Education,” Computer Science Education, February 2018, 1–22, doi:10.1080/08993408.2018.1429062.

  256. Eric Roberts, “Assessing and Responding to the Growth of Computer Science Undergraduate Enrollments: Annotated Findings” (http://cs.stanford.edu/people/eroberts/ResourcesForTheCSCapacityCrisis/files/AnnotatedFindings.pptx, 2017).

  257. Vashti Galpin, “Women in Computing Around the World,” ACM SIGCSE Bulletin 34, no. 2 (June 2002), doi:10.1145/543812.543839; Roli Varma and Deepak Kapur, “Decoding Femininity in Computer Science in India,” Communications of the ACM 58, no. 5 (April 2015): 56–62, doi:10.1145/2663339.

  258. Jane Margolis and Allan Fisher, Unlocking the Clubhouse: Women in Computing (MIT Press, 2003).

  259. Roberts, “Assessing and Responding to the Growth of Computer Science Undergraduate Enrollments.”

  260. David I. Miller and Jonathan Wai, “The Bachelor’s to Ph.D. STEM Pipeline No Longer Leaks More Women Than Men: A 30-Year Analysis,” Frontiers in Psychology 6 (February 2015), doi:10.3389/fpsyg.2015.00037.

  261. Janet Abbate, Recoding Gender: Women’s Changing Participation in Computing (MIT Press, 2012).

  262. Nathan L. Ensmenger, “Letting the ‘Computer Boys’ Take over: Technology and the Politics of Organizational Transformation,” International Review of Social History 48, no. S11 (December 2003): 153–80, doi:10.1017/s0020859003001305; Nathan L. Ensmenger, The Computer Boys Take over: Computers, Programmers, and the Politics of Technical Expertise (MIT Press, 2012).

  263. Marie Hicks, Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing (MIT Press, 2018).

  264. Kate M. Miltner, “Girls Who Coded: Gender in Twentieth Century U.K. and U.S. Computing,” Science, Technology, & Human Values, May 2018, doi:10.1177/0162243918770287.

  265. Lee, “What Can I Do Today to Create a More Inclusive Community in CS?”

  266. Dennis Littky, The Big Picture: Education Is Everyone’s Business (Association for Supervision & Curriculum Development (ASCD), 2004).

  267. Quintin Cutts et al., “Early Developmental Activities and Computing Proficiency,” in 2017 Conference on Innovation and Technology in Computer Science Education (ITiCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3174781.3174789.

  268. Center for Community Organizations, “The ‘Problem’ Woman of Colour in the Workplace” (https://coco-net.org/problem-woman-colour-nonprofit-organizations/, 2018).

  269. Watters, The Monsters of Education Technology.

  270. Robert Ubell, “How the Pioneers of the MOOC Got It Wrong” (http://spectrum.ieee.org/tech-talk/at-work/education/how-the-pioneers-of-the-mooc-got-it-wrong, 2017).

  271. Anoush Margaryan, Manuela Bianco, and Allison Littlejohn, “Instructional Quality of Massive Open Online Courses (MOOCs),” Computers & Education 80 (January 2015): 77–83, doi:10.1016/j.compedu.2014.08.005.

  272. Ada S. Kim and Amy J. Ko, “A Pedagogical Analysis of Online Coding Tutorials,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017728.

  273. Raj Chetty, John N. Friedman, and Jonah E. Rockoff, “Measuring the Impacts of Teachers II: Teacher Value-Added and Student Outcomes in Adulthood,” American Economic Review 104, no. 9 (September 2014): 2633–79, doi:10.1257/aer.104.9.2633.

  274. McMillan Cottom, Lower Ed.

  275. John D. Hansen and Justin Reich, “Democratizing Education? Examining Access and Usage Patterns in Massive Open Online Courses,” Science 350, no. 6265 (December 2015): 1245–8, doi:10.1126/science.aab3782.

  276. Margaryan, Bianco, and Littlejohn, “Instructional Quality of Massive Open Online Courses (MOOCs)”; Miller, Minds Online; Linda B. Nilson and Ludwika A. Goodson, Online Teaching at Its Best: Merging Instructional Design with Teaching and Learning Research (Jossey-Bass, 2017).

  277. Robert E. Kraut and Paul Resnick, Building Successful Online Communities: Evidence-Based Social Design (MIT Press, 2016).

  278. Karl Fogel, Producing Open Source Software: How to Run a Successful Free Software Project (O’Reilly Media, 2005).

  279. Victoria Beck, “Testing a Model to Predict Online Cheating—Much Ado About Nothing,” Active Learning in Higher Education 15, no. 1 (January 2014): 65–75, doi:10.1177/1469787413514646.

  280. Lang, Cheating Lessons.

  281. Stockard et al., “The Effectiveness of Direct Instruction Curricula.”

  282. Kenneth R. Koedinger et al., “Learning Is Not a Spectator Sport: Doing Is Better Than Watching for Learning from a MOOC,” in 2015 Conference on Learning @ Scale (L@S’15) (Association for Computing Machinery (ACM), 2015), doi:10.1145/2724660.2724681.

  283. Nicholas Chen and Maurice Rabb, “A Pattern Language for Screencasting,” in 2009 Conference on Pattern Languages of Programs (PLoP’09) (Association for Computing Machinery (ACM), 2009), doi:10.1145/1943226.1943234.

  284. Ibid.

  285. Philip J. Guo, Juho Kim, and Rob Rubin, “How Video Production Affects Student Engagement,” in 2014 Conference on Learning @ Scale (L@S’14) (Association for Computing Machinery (ACM), 2014), doi:10.1145/2556325.2566239.

  286. Ibid.

  287. Stark and Freishtat, “An Evaluation of Course Evaluations”; Uttl, White, and Gonzalez, “Meta-Analysis of Faculty’s Teaching Effectiveness.”

  288. Guo, Kim, and Rubin, “How Video Production Affects Student Engagement.”

  289. Muller et al., “Saying the Wrong Thing.”

  290. Alison King, “From Sage on the Stage to Guide on the Side,” College Teaching 41, no. 1 (January 1993): 30–35, doi:10.1080/87567555.1993.9926781.

  291. Jennifer Campbell, Diane Horton, and Michelle Craig, “Factors for Success in Online CS1,” in 2016 Conference on Innovation and Technology in Computer Science Education (ITiCSE’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2899415.2899457.

  292. Emily Nordmann et al., “Turn up, Tune in, Don’t Drop Out: The Relationship Between Lecture Attendance, Use of Lecture Recordings, and Achievement at Different Levels of Study” (https://psyarxiv.com/fd3yj, 2017), doi:10.17605/OSF.IO/FD3YJ.

  293. Graham Nuthall, The Hidden Lives of Learners (NZCER Press, 2007).

  294. Miller, Minds Online.

  295. Mickey Vellukunnel et al., “Deconstructing the Discussion Forum: Student Questions and Computer Science Learning,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017745.

  296. Ned Gulley, “In Praise of Tweaking,” Interactions 11, no. 3 (May 2004): 18, doi:10.1145/986253.986264.

  297. Lina Battestilli, Apeksha Awasthi, and Yingjun Cao, “Two-Stage Programming Projects: Individual Work Followed by Peer Collaboration,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159486.

  298. Paré and Joordens, “Peering into Large Lectures.”

  299. Kulkarni et al., “Peer and Self Assessment in Massive Online Classes.”

  300. Raj et al., “Role of Live-Coding in Learning Introductory Programming”; Haaranen, “Programming as a Performance - Live-Streaming and Its Implications for Computer Science Education.”

  301. Wijnand A. IJsselsteijn et al., “Presence: Concept, Determinants, and Measurement,” in 2000 Conference on Human Vision and Electronic Imaging, ed. Bernice E. Rogowitz and Thrasyvoulos N. Pappas (SPIE, 2000), doi:10.1117/12.387188.

  302. Debzani Deb et al., “MRS: Automated Assessment of Interactive Classroom Exercises,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159607.

  303. Brookfield and Preskill, The Discussion Book.

  304. Alicia Iriberri and Gondy Leroy, “A Life-Cycle Perspective on Online Community Success,” ACM Computing Surveys 41, no. 2 (February 2009): 1–29, doi:10.1145/1459352.1459356.

  305. Kate Sanders et al., “The Canterbury QuestionBank: Building a Repository of Multiple-Choice CS1 and CS2 Questions,” in 2013 Conference on Innovation and Technology in Computer Science Education (ITiCSE’13) (Association for Computing Machinery (ACM), 2013), doi:10.1145/2543882.2543885.

  306. Parsons and Haden, “Parson’s Programming Puzzles”; Barbara J. Ericson et al., “Usability and Usage of Interactive Features in an Online Ebook for CS Teachers,” in 2015 Workshop in Primary and Secondary Computing Education (WiPSCE’15) (Association for Computing Machinery (ACM), 2015), 111–20, doi:10.1145/2818314.2818335; Morrison et al., “Subgoals Help Students Solve Parsons Problems”; Ericson, Margulieux, and Rick, “Solving Parsons Problems Versus Fixing and Writing Code.”

  307. Ihantola and Karavirta, “Two-Dimensional Parson’s Puzzles.”

  308. Kyle James Harms, Jason Chen, and Caitlin L. Kelleher, “Distractors in Parsons Problems Decrease Learning Efficiency for Young Novice Programmers,” in 2016 International Computing Education Research Conference (ICER’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2960310.2960314.

  309. Michal Armoni and David Ginat, “Reversing: A Fundamental Idea in Computer Science,” Computer Science Education 18, no. 3 (September 2008): 213–30, doi:10.1080/08993400802332670.

  310. Jack Hollingsworth, “Automatic Graders for Programming Classes,” Communications of the ACM 3, no. 10 (October 1960): 528–29, doi:10.1145/367415.367422.

  311. Christopher Douce, David Livingstone, and James Orwell, “Automatic Test-Based Assessment of Programming,” Journal on Educational Resources in Computing 5, no. 3 (September 2005), doi:10.1145/1163405.1163409; Petri Ihantola et al., “Review of Recent Systems for Automatic Assessment of Programming Assignments,” in 2010 Koli Calling Conference on Computing Education Research (Koli’10) (Association for Computing Machinery (ACM), 2010), doi:10.1145/1930464.1930480.

  312. Stephen H. Edwards, Zalia Shams, and Craig Estep, “Adaptively Identifying Non-Terminating Code When Testing Student Programs,” in 2014 Technical Symposium on Computer Science Education (SIGCSE’14) (Association for Computing Machinery (ACM), 2014), doi:10.1145/2538862.2538926.

  313. Phil Maguire, Rebecca Maguire, and Robert Kelly, “Using Automatic Machine Assessment to Teach Computer Programming,” Computer Science Education, February 2018, 1–18, doi:10.1080/08993408.2018.1435113.

  314. Manuel Rubio-Sánchez et al., “Student Perception and Usage of an Automated Programming Assessment Tool,” Computers in Human Behavior 31 (February 2014): 453–60, doi:10.1016/j.chb.2013.04.001.

  315. Sumukh Sridhara et al., “Fuzz Testing Projects in Massive Courses,” in 2016 Conference on Learning @ Scale (L@S’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2876034.2876050.

  316. Soumya Basu et al., “Problems Before Solutions: Automated Problem Clarification at Scale,” in 2015 Conference on Learning @ Scale (L@S’15) (Association for Computing Machinery (ACM), 2015), doi:10.1145/2724660.2724679.

  317. Stephen Nutbrown and Colin Higgins, “Static Analysis of Programming Exercises: Fairness, Usefulness and a Method for Application,” Computer Science Education 26, nos. 2-3 (May 2016): 104–28, doi:10.1080/08993408.2016.1179865.

  318. Hieke Keuning, Johan Jeuring, and Bastiaan Heeren, “Towards a Systematic Review of Automated Feedback Generation for Programming Exercises,” in 2016 Conference on Innovation and Technology in Computer Science Education (ITiCSE’16) (Association for Computing Machinery (ACM), 2016), doi:10.1145/2899415.2899422; Hieke Keuning, Johan Jeuring, and Bastiaan Heeren, “Towards a Systematic Review of Automated Feedback Generation for Programming Exercises - Extended Version” (Technical Report UU-CS-2016-001, Utrecht University, 2016).

  319. Kevin Buffardi and Stephen H. Edwards, “Reconsidering Automated Feedback: A Test-Driven Approach,” in 2015 Technical Symposium on Computer Science Education (SIGCSE’15) (Association for Computing Machinery (ACM), 2015), doi:10.1145/2676723.2677313.

  320. Ibid.

  321. Sridhara et al., “Fuzz Testing Projects in Massive Courses.”

  322. Martijn Stegeman, Erik Barendsen, and Sjaak Smetsers, “Rubric for Feedback on Code Quality in Programming Courses” (http://stgm.nl/quality, 2016).

  323. Andrew Luxton-Reilly, “A Systematic Review of Tools That Support Peer Assessment,” Computer Science Education 19, no. 4 (December 2009): 209–32, doi:10.1080/08993400903384844.

  324. Zack Butler, Ivona Bezakova, and Kimberly Fluet, “Pencil Puzzles for Introductory Computer Science,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017765.

  325. Pasi Sahlberg, Finnish Lessons 2.0: What Can the World Learn from Educational Change in Finland? (Teachers College Press, 2015); Wilkinson and Pickett, The Spirit Level.

  326. Etienne Wenger-Trayner and Beverly Wenger-Trayner, “Communities of Practice: A Brief Introduction” (http://wenger-trayner.com/intro-to-cops/, 2015).

  327. People who prefer the latter are often only interested in arguing.

  328. Saul D. Alinsky, Rules for Radicals: A Practical Primer for Realistic Radicals (Vintage, 1989); George Lakey, How We Win: A Guide to Nonviolent Direct Action Campaigning (Melville House, 2018).

  329. Brown, Building Powerful Community Organizations; Midwest Academy, Organizing for Social Change: Midwest Academy Manual for Activists, Fourth Edition (The Forum Press, 2010); Lakey, How We Win.

  330. Frank Adams and Myles Horton, Unearthing Seeds of Fire: The Idea of Highlander (Blair, 1975).

  331. Dan Spalding, How to Teach Adults: Plan Your Class, Teach Your Students, Change the World (Jossey-Bass, 2014).

  332. Dan Sholler et al., “Ten Simple Rules for Helping Newcomers Become Contributors to Open Source Projects” (https://github.com/gvwilson/10-newcomers/, 2019).

  333. Fogel, Producing Open Source Software.

  334. Vandana Singh, “Newcomer Integration and Learning in Technical Support Communities for Open Source Software,” in 2012 ACM International Conference on Supporting Group Work - GROUP’12 (ACM Press, 2012), doi:10.1145/2389176.2389186; Igor Steinmacher et al., “Why Do Newcomers Abandon Open Source Software Projects?” in 2013 International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE’13) (Institute of Electrical; Electronics Engineers (IEEE), 2013), doi:10.1109/chase.2013.6614728; Igor Steinmacher et al., “Almost There: A Study on Quasi-Contributors in Open-Source Software Projects,” in 2018 International Conference on Software Engineering (ICSE’18) (ACM Press, 2018), doi:10.1145/3180155.3180208.

  335. Barthélémy Dagenais et al., “Moving into a New Software Project Landscape,” in 2010 International Conference on Software Engineering (ICSE’10) (ACM Press, 2010), doi:10.1145/1806799.1806842.

  336. Igor Steinmacher et al., “Overcoming Open Source Project Entry Barriers with a Portal for Newcomers,” in 2016 International Conference on Software Engineering (ICSE’16) (ACM Press, 2016), doi:10.1145/2884781.2884806.

  337. Jo Freeman, “The Tyranny of Structurelessness,” The Second Wave 2, no. 1 (1972).

  338. Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (Cambridge University Press, 2015).

  339. David Bollier, Think Like a Commoner: A Short Introduction to the Life of the Commons (New Society Publishers, 2014).

  340. This is one of the times when it is worth having ties to local government or other like-minded organizations.

  341. Alessandra Pigni, The Idealist’s Survival Kit: 75 Simple Ways to Prevent Burnout (Parallax Press, 2016).

  342. M. S. Hagger et al., “A Multilab Preregistered Replication of the Ego-Depletion Effect,” Perspectives on Psychological Science 11, no. 4 (2016): 546–73, doi:10.1177/1745691616652873.

  343. Brown, Building Powerful Community Organizations.

  344. Fogel, Producing Open Source Software.

  345. Greg Wilson, “Software Carpentry: Lessons Learned,” F1000Research, January 2016, doi:10.12688/f1000research.3-62.v2; Gabriel A. Devenyi et al., “Ten Simple Rules for Collaborative Lesson Development,” PLoS Computational Biology 14, no. 3 (March 2018), doi:10.1371/journal.pcbi.1005963.

  346. Marc J. Kuchner, Marketing for Scientists: How to Shine in Tough Times (Island Press, 2011).

  347. Viviane Schwarz, Welcome to Your Awesome Robot (Flying Eye Books, 2013).

  348. Betsy DiSalvo, Cecili Reid, and Parisa Khanipour Roshan, “They Can’t Find Us,” in 2014 Technical Symposium on Computer Science Education (SIGCSE’14) (Association for Computing Machinery (ACM), 2014), doi:10.1145/2538862.2538933.

  349. And the prevalence of fixed mindsets among faculty when it comes to teaching, i.e., the belief that some people are “just better teachers.”

  350. Barker, Hovey, and Gruning, “What Influences CS Faculty to Adopt Teaching Practices?”

  351. Stark and Freishtat, “An Evaluation of Course Evaluations”; Uttl, White, and Gonzalez, “Meta-Analysis of Faculty’s Teaching Effectiveness.”

  352. Mark S. Bauer et al., “An Introduction to Implementation Science for the Non-Specialist,” BMC Psychology 3, no. 1 (September 2015), doi:10.1186/s40359-015-0089-9.

  353. Aman Yadav et al., “Expanding Computer Science Education in Schools: Understanding Teacher Experiences and Challenges,” Computer Science Education 26, no. 4 (December 2016): 235–54, doi:10.1080/08993408.2016.1257418.

  354. Sathya Narayanan et al., “Upward Mobility for Underrepresented Students,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159551.

  355. Helen H. Hu et al., “Building a Statewide Computer Science Teacher Pipeline,” in 2017 Technical Symposium on Computer Science Education (SIGCSE’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3017680.3017788.

  356. Maura Borrego and Charles Henderson, “Increasing the Use of Evidence-Based Teaching in STEM Higher Education: A Comparison of Eight Change Strategies,” Journal of Engineering Education 103, no. 2 (April 2014): 220–52, doi:10.1002/jee.20040.

  357. Charles Henderson et al., Designing Educational Innovations for Sustained Adoption (Increase the Impact, 2015); Charles Henderson et al., “Designing Educational Innovations for Sustained Adoption (Executive Summary)” (http://www.increasetheimpact.com/resources.html; Increase the Impact, 2015).

  358. McMillan Cottom, Lower Ed.

  359. Kyle Thayer and Amy J. Ko, “Barriers Faced by Coding Bootcamp Students,” in 2017 International Computing Education Research Conference (ICER’17) (Association for Computing Machinery (ACM), 2017), doi:10.1145/3105726.3106176.

  360. Quinn Burke et al., “Understanding the Software Development Industry’s Perspective on Coding Boot Camps Versus Traditional 4-Year Colleges,” in 2018 Technical Symposium on Computer Science Education (SIGCSE’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3159450.3159485.

  361. Lang, Small Teaching.

  362. Manns and Rising, Fearless Change.

  363. Kuchner, Marketing for Scientists.

  364. April Y. Wang et al., “Mismatch of Expectations: How Modern Learning Resources Fail Conversational Programmers,” in 2018 Conference on Human Factors in Computing Systems (CHI’18) (Association for Computing Machinery (ACM), 2018), doi:10.1145/3173574.3174085.

  365. David F. Labaree, “The Winning Ways of a Losing Strategy: Educationalizing Social Problems in the United States,” Educational Theory 58, no. 4 (November 2008): 447–60, doi:10.1111/j.1741-5446.2008.00299.x.

  366. Eugene Farmer, “The Gatekeeper’s Guide, or How to Kill a Tool,” IEEE Software 23, no. 6 (November 2006): 12–13, doi:10.1109/ms.2006.174.

  367. Parsons and Haden, “Parson’s Programming Puzzles.”

  368. Fink, Creating Significant Learning Experiences.

  369. Brown, Building Powerful Community Organizations; Brookfield and Preskill, The Discussion Book; Steven G. Rogelberg, The Surprising Science of Meetings (Oxford University Press, 2018).

  370. I certainly did when this was done to me.

  371. Anne Minahan, “Martha’s Rules,” Affilia 1, no. 2 (June 1986): 53–56, doi:10.1177/088610998600100206.

  372. Esther Derby and Diana Larsen, Agile Retrospectives: Making Good Teams Great (Pragmatic Bookshelf, 2006).

  373. Atul Gawande, “The Checklist,” The New Yorker, December 10, 2007.

  374. Emma-Louise Aveling, Peter McCulloch, and Mary Dixon-Woods, “A Qualitative Study Comparing Experiences of the Surgical Safety Checklist in Hospitals in High-Income and Low-Income Countries,” BMJ Open 3, no. 8 (August 2013), doi:10.1136/bmjopen-2013-003039; David R. Urbach et al., “Introduction of Surgical Safety Checklists in Ontario, Canada,” New England Journal of Medicine 370, no. 11 (March 2014): 1029–38, doi:10.1056/nejmsa1308261; G. Ramsay et al., “Reducing Surgical Mortality in Scotland by Use of the WHO Surgical Safety Checklist,” BJS, April 2019, doi:10.1002/bjs.11151.

  375. Wiggins and McTighe, Understanding by Design; Biggs and Tang, Teaching for Quality Learning at University; Fink, Creating Significant Learning Experiences.