Does it matter if we can’t distinguish between human and computer-generated dreams?
A new generation of AI tools can be trained on a set of texts, then take a prompt and create new texts that are semantically similar to the original texts. As researchers experiment with different applications of this technology, the question naturally arises as to what would happen if these tools were trained with a collection of dream reports. If an AI system then produced a new set of texts, what would be the significance of these computer-generated “dreams”? Is there any value to pursuing this line of inquiry, either for AI development or for dream research?
Yes, there are some potential benefits here for the study of dreams and for efforts to improve the AI systems. However, we will first need to overcome the negative effects of an inevitable but ultimately dead-end experiment. This experiment will present a collection of texts to a panel of dream experts and ask them to distinguish the human-generated dreams from the computer-generated ones. The almost-certain result will be that the experts cannot reliably tell real dream reports from fake ones.
What would this mean? One might easily conclude from such an experiment that dream research as a whole is untrustworthy and self-deluded in its claims. The results would seem to demonstrate a fundamental lack of objectivity in the study of dreams.
This may seem sensible, but it reflects a lack of familiarity with actual dream research. For most researchers, the question of how to distinguish between real and fake dream reports is not something they worry about. Why not? The reason is simple: The virtually infinite creativity of dreams means that there is NO linguistic marker (other than what the dreamer may indicate) that can absolutely and consistently distinguish a genuine dream report from another kind of text. However large a collection of past texts you have analyzed, that does not prevent future dreams from taking unprecedented, unpredictable new forms.
Indeed, many researchers lean into the idea that there are no boundaries to what forms dreaming can take. They approach extremely unusual and bizarre types of dreams, what Jung referred to as “big dreams,” not with skepticism about their legitimacy, but with special interest in their potential creativity and symbolic complexity. Moreover, psychoanalysts generally don’t care about this issue, either, because from a Freudian perspective it does not matter if you made up a dream—your “fake” dream still reveals your unconscious conflicts, just as your “real” dreams do.
Because this question of real vs. fake dreams is something that researchers themselves generally do not believe has pragmatic relevance for their work, such an experiment would be more of a gimmick than anything else. It would reveal nothing of significance for the study of dreams and might cast an unfair shadow of doubt on the credibility of those who work in the field.
So is there any positive use for this technology? Yes, several possibilities beckon. One such use could be termed a “personalized dream extender.” If an AI were trained on a set of an individual’s dreams and were then prompted to generate a “new” dream, the result might provide the individual with some “aha!” insights. There could be therapeutic applications of this, too: giving individuals a more expansive sense of the potential of their own imaginations as mirrored back by the AI-generated dreams.
However, even this practice would require careful framing, to prevent people from assuming the AI system has authoritative knowledge of their dreaming selves. Our present-day cultural presumption that virtually any new technology is superior could lead, in this case, to people losing faith in their own dreaming capacities when encountering AI-generated dreams, or, perhaps more ominously, unconsciously trying to mold their dreaming to align with what the AI is telling them they should be dreaming.
Another positive potential for this technology would involve an experiment that could be genuinely interesting in its results and would make good use of dream researcher expertise. The experiment would be this: Train several different AI systems on the same set of dreams, then have each of the systems generate its own set of new dreams. At this point, bring in a panel of dream experts to discern and identify the significant differences between the sets. The results could give insight into what makes each individual AI system different from the others and suggest ways to improve and refine their algorithms. More than that, the findings of such an experiment could reveal aspects of the “unconscious” of each system, highlighting its implicit values and subtle biases. This could contribute to the vital collective task of learning more about how these extremely powerful and increasingly widespread AI tools actually function in the world.
Note: this post first appeared in Psychology Today on July 26, 2022.