Collaborative AI Learning Environments: Show Me What I'm Saying
Open-ended responses require synthesis and are uncomfortable territory for students early in the learning process.
Humans are quite good at synthesis. However, we sometimes have trouble retrieving the right information at the right time, which can prevent us from responding as thoroughly as we are capable of.
What are some ways we can fight this retrieval barrier? Perhaps with a collaborative agent. In a 2016 study, Eugster et al. built a recommendation engine based on neural activity. This kind of observational recommendation has existed as long as search keywords have been around (and even before).
But what about creative learning environments, where the goal is not merely to gauge interest and point down well-worn paths from the learner's current position? Furthermore, how might we avoid surfacing information that the learner is tasked with retrieving?
Setting up these guidelines is critical: we know that learning depends on intervals of information retrieval and synthesis, so the strategy shouldn't lean into revelation territory.
Of course, context must be taken into account. ELIZA was built specifically as a parody, and few people at that point in history had been exposed to interactive computation of any sort, so, against that backdrop, it probably wasn't difficult to convince people that the computer was quite intelligent.
However, ELIZA (and many examples since) illustrates the power of a relatively static collaborator in opening up a person's thinking.
As a programmer, I use "rubber duck debugging" on a regular basis: a process of forced, methodical articulation. Given a static collaborator (the duck), I am able to direct my attention to applying the method more completely.
One way to lower the retrieval barrier might be to create an intelligent, collaborative reflection of what the learner is saying.
This might require a "processing" section: a scratch pad where a user can draw or type their thoughts as they work through them. Assured that this scratchpad is not being scrutinized, a student might use it to explore the information they are able to retrieve, visualize how the pieces connect, and formulate the synthesis that will end up in the "official" answer.
So how does our intelligent collaborative agent act?
Some ideas:
- Use some basic NLP to identify object tokens and visualize them on a parallel screen. For example, "three dots" might render three dots of equal size. The learner could then manipulate placement through text or direct input. This makes the text more than a human-readable string; it connects the words to a representational model (see the first sketch after this list).
- Provide a common vocabulary for generating visualizations and drawing connections across the scratchpad narrative. For example, when a learner writes a linear equation in a simple written syntax, render its visual representation. Allow named variables so that abstraction happens in the formalization step (see the second sketch after this list).
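To make the first idea concrete, here is a minimal sketch of the token-to-shape reflection. Everything in it (the word tables, `extract_objects`, `to_scene`) is a hypothetical placeholder: a deliberately naive word-pair matcher stands in for real NLP.

```python
import re

# Hypothetical word tables; a real agent would use a proper NLP pipeline.
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
SHAPE_WORDS = {"dot", "dots", "square", "squares", "circle", "circles"}

def extract_objects(text: str) -> list[dict]:
    """Scan free-form scratchpad text for countable shape mentions."""
    tokens = re.findall(r"[a-z']+", text.lower())
    objects = []
    for count_word, shape in zip(tokens, tokens[1:]):
        if count_word in NUMBER_WORDS and shape in SHAPE_WORDS:
            objects.append({"shape": shape.rstrip("s"),
                            "count": NUMBER_WORDS[count_word]})
    return objects

def to_scene(objects: list[dict]) -> list[dict]:
    """Expand counts into individually addressable shapes on a simple grid;
    on a real parallel screen the learner could drag these around."""
    return [
        {"shape": obj["shape"], "x": col, "y": row}
        for row, obj in enumerate(objects)
        for col in range(obj["count"])
    ]

print(to_scene(extract_objects("I picture three dots above two squares.")))
```

The design point is that the scene is structured data rather than pixels, which is what would let text edits and direct manipulation stay in sync.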
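The second idea might look like this: a tiny written syntax for lines of the form `y = m*x + b`, parsed and handed to a plotting surface (matplotlib stands in for the parallel screen here). The syntax and `parse_linear` are assumptions, and learner-named variables would be the natural next step.

```python
import re
import matplotlib.pyplot as plt

# Hypothetical scratchpad syntax: "y = 2x + 1", "y = -x - 3", "y = 0.5*x".
LINE_RE = re.compile(
    r"\s*y\s*=\s*(?P<m>[+-]?\d*\.?\d*)\s*\*?\s*x\s*"
    r"(?:(?P<sign>[+-])\s*(?P<b>\d+\.?\d*))?\s*$"
)

def parse_linear(expr: str) -> tuple[float, float]:
    """Return (slope, intercept) for a 'y = m*x + b' style expression."""
    match = LINE_RE.match(expr)
    if match is None:
        raise ValueError(f"not a linear equation: {expr!r}")
    m = match.group("m")
    slope = float(m + "1") if m in ("", "+", "-") else float(m)
    intercept = float(match.group("sign") + match.group("b")) if match.group("b") else 0.0
    return slope, intercept

def reflect(expr: str, lo: float = -5.0, hi: float = 5.0) -> None:
    """Draw the line on the 'parallel screen' (a matplotlib window here)."""
    slope, intercept = parse_linear(expr)
    xs = [lo + i * (hi - lo) / 100 for i in range(101)]
    plt.plot(xs, [slope * x + intercept for x in xs])
    plt.title(expr)
    plt.show()

reflect("y = 2x + 1")
```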
Of course, returning to our previous discussion, we want to avoid revealing information that the learner is trying to instill in their own mind; therefore, it would be necessary to provide guardrails around what types of information may be reflected in a given circumstance. If the student is learning about 18th-century European history, it probably won't help their learning process if they can simply query "timeline of the Seven Years' War" and get a visual representation of it.
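A minimal sketch of such a guardrail, assuming a per-lesson blocklist and naive substring matching (both placeholders; a real system would need something closer to a topic classifier):

```python
# Hypothetical per-lesson blocklist: topics the learner must retrieve unaided.
BLOCKED_TOPICS = {
    "european-history-18c": ["seven years' war", "treaty of paris"],
}

def may_reflect(lesson_id: str, query: str) -> bool:
    """Refuse to visualize a query that would reveal lesson material."""
    query = query.lower()
    return not any(topic in query for topic in BLOCKED_TOPICS.get(lesson_id, []))

print(may_reflect("european-history-18c", "timeline of the Seven Years' War"))  # False
print(may_reflect("european-history-18c", "three dots above two squares"))      # True
```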
Inferring what a learner needs to see, within such guardrails, could make their mental processing of information much more powerful, allowing them to synthesize with greater depth. Educators may do well to provide better tools for the process of synthesis. In the same way that pencil and paper externalize ideas, perhaps it is time for learning tools to mature in a way that extends our ability to articulate raw information for analysis and synthesis.