Pick a number between one and ten...

...and the good (possibly mad) scientists at the Max Planck Institute, Oxford University, and UCL will tell you whether you'll be honest if I guess it correctly. According to a recent article in The Guardian, advances in brain-scanning technology have made it possible to detect and correctly interpret human intention. Theoretically, I could scan your brain while questioning you and know whether you plan to lie, commit a crime, or vote Republican.

The research remains in its early stages, but it is moving along quickly enough to bring up the old scientific stumbling-block of ethics. What will the technology be used for? Whose hands are the wrong hands? Can it be used to find out how many more blades Gillette will add to their razors before the madness ends?

The article brings up your Big Brother, Minority Report scenarios, and suggests that scientists are well aware of them. I am no Luddite, and I believe that advances in technology have done more good than harm (the jury remains out on the internet), but I was particularly struck by the following snippet:
The use of brain scanners to judge whether people are likely to commit crimes is a contentious issue that society should tackle now, according to Prof Haynes. "We see the danger that this might become compulsory one day, but we have to be aware that if we prohibit it, we are also denying people who aren't going to commit any crime the possibility of proving their innocence."
It's probably safe to assume that Prof. Haynes didn't mean it quite the way it reads out of context. But it certainly sounds as though I will be somehow obliged to prove my innocence before I have done anything. One wonders precisely what I've done to get myself into that situation beyond looking shifty-eyed and making wisecracks like the one above about voting Republican. In any case, it seems that in the future, happily, those of us who don't intend to commit any crimes won't be denied the opportunity to prove it.

Here's another quotation that gave me pause:
"Do we want to become a 'Minority Report' society where we're preventing crimes that might not happen?" she [Barbara Sahakian, a professor of neuropsychology at Cambridge University] asked. "For some of these techniques, it's just a matter of time. It is just another new technology that society has to come to terms with and use for the good, but we should discuss and debate it now because what we don't want is for it to leak into use in court willy-nilly without people having thought about the consequences."
I find this less than reassuring. The quotation suggests that there are scientists, and then there is society; the former will explore the frontiers of science, the latter will have to deal with the consequences and make the choices. Scientists -- not that I know many of them -- don't seem to be much in the habit of letting go once they've caught on to something. It's just a matter of time. It will be society's job, though, to use the technology for good instead of evil. Not to be an alarmist, but when has a technology that had the potential to do harm been developed and then done no harm? It looks to me like we're in for yet another of those uncomfortable trade-offs.

I bring this up because I have recently had my first face-to-face encounters with proper scientists in a graduate seminar I am auditing on Aesthetics and Cognitive Science. In our first session, someone brought up the possibility of determining a "minimum set of conditions" necessary to evoke an aesthetic response or experience. After we have worked our way through Burke, Hogarth, Kant, et al., we will move into neural science and psychophysics. The people in the room don't appear to me to be any of your MKUltra, Manchurian Candidate, Orwellian overlord types, but as yet there has been no discussion of the consequences of discovery. Imagine what happens if we manage to define that set of conditions. Do we get art, or poetry, or novels, by algorithm? Computer authors? To what will the lowest common denominator reduce us all? How soon before the GOP and DNC latch on and start bringing out more perfectly manipulative political ads? What will Gillette do with the knowledge?

The quest to convince (or manipulate) audiences has been the subject of philosophical and scientific discourse at least since the days of Longinus. Thus far, though, the quest has remained more philosophy than science; it has relied on trial and error, precedent, and data collected after the fact. Ad campaigns for people and products still fail. Those with the power and will to do "evil" have not had the benefit of a completely defined set of procedures based on the hard-wiring of the human brain. I am not necessarily suggesting that the course I am now taking will lead there. I am ignorant enough about psychophysics and cognitive science to speculate wildly.

Am I sitting in a room full of future Oppenheimers? Where does the line fall between proceeding cautiously and choosing not to proceed at all?
