Ask A Genius (or Two):
Conversation with Dr. Claus Volko and Rick Rosner
Rick Rosner and I conduct a conversational series entitled Ask A Genius on a variety of subjects through In-Sight Publishing and on Rick’s personal and professional website. Rick is listed on the World Genius Directory as having the world’s second highest IQ at 192, based on several ultra-high-IQ test scores developed by independent psychometricians. Dipl.-Ing. Dr. Claus D. Volko, B.Sc., earned a score of 172 on the Equally Normed Numerical Derivation Tests (ENNDT) by Marco Ripà and Gaetano Morelli. Both scores use a standard deviation of 15: a sigma of ~6.13 for Rick – a general intelligence rarity of 1 in 2,314,980,850 – and 4.80 for Claus – a general intelligence rarity of 1 in 1,258,887. Of course, the higher the general intelligence score, the greater the variability in, and margin of error of, the score, because of the greater rarity in the population. This amounts to a joint interview, or conversation, among Dr. Claus Volko, Rick Rosner, and myself on “The Nature of Intelligence.”
Written by Scott Douglas Jacobsen
Langley, British Columbia, Canada
Scott Douglas Jacobsen: Claus meet Rick. Rick meet Claus. The topic is “The Nature of Intelligence” for this discussion. Claus, you are a programmer, medical scientist, and expert in computational intelligence. That is, you have the relevant expertise. Therefore, it seems most appropriate to have the groundwork, e.g. common terms, premises (or assumptions), and theories within computational intelligence, provided by you. To begin, what are the common terms, premises (or assumptions), and theories within computational intelligence at the frontier of the discipline? From there, we can discuss the nature of intelligence within a firm context.
Dipl.-Ing. Dr. Claus D. Volko, B.Sc.: Hello Scott, hello Rick, I am happy to be here with you.
Computational intelligence is a subdiscipline of computer science that aims to enable computers to make autonomous decisions based on reasoning, so that computers ultimately display behavior which human beings would consider “intelligent”. The primary assumption of computational intelligence is that intelligent behavior can emerge from computation. Techniques used in this subdiscipline include neural networks, machine learning, search algorithms, metaheuristics, and evolutionary computation.
Nowadays a lot of computer scientists specialize in machine learning, a subdiscipline of computational intelligence in which the computer is trained to solve classification and regression problems on its own. There are three main types: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, the computer is given a labeled training set, from which it learns to classify data or fit a regression curve; after training, the computer can classify new data of a similar kind on its own. In unsupervised learning, the computer tries to find structure in the data by itself. One type of unsupervised learning is known as clustering: the computer is provided with data and has to come up with categories to which subsets of this data can be assigned. Finally, reinforcement learning is a type of machine learning in which the computer receives a “reward” for correct behavior and acts so as to maximize that reward. Nowadays you often bump into the buzzword “deep learning”; that is an umbrella term for variants of machine learning which have in common that they employ multi-layer neural networks. Deep learning techniques have recently yielded a lot of success, e.g. in gaming. For instance, the program AlphaGo, which beat one of the best Go players in the world a couple of years ago, employs deep learning.
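[Editor’s note: the supervised-learning idea above – learn from labeled examples, then classify new data – can be illustrated with a minimal sketch. This is not code from any system discussed here; it is a toy 1-nearest-neighbor classifier with made-up data, using only the Python standard library.]

```python
import math

# A minimal sketch of supervised learning: a 1-nearest-neighbor classifier.
# The training set pairs feature vectors with labels; a new point is
# classified with the label of the closest training example.

def euclidean(a, b):
    """Distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor_classify(training_set, point):
    """Return the label of the training example closest to `point`."""
    _, label = min(training_set, key=lambda pair: euclidean(pair[0], point))
    return label

# Toy training data: two clusters labeled "A" and "B" (illustrative only).
training = [
    ((0.0, 0.0), "A"), ((0.2, 0.1), "A"),
    ((5.0, 5.0), "B"), ((4.8, 5.2), "B"),
]

print(nearest_neighbor_classify(training, (0.1, 0.3)))  # -> A
print(nearest_neighbor_classify(training, (4.9, 4.7)))  # -> B
```

After “training” (here, just storing the examples), the classifier generalizes to unseen points of a similar kind, which is the essence of the supervised setting Volko describes.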
In general, speech recognition, image recognition, and natural language processing are considered real-world applications of machine learning. Machine learning algorithms are used for optical character recognition (to process handwritten texts), for controlling computers by voice (as is already possible in Windows 10 using MS Cortana), and for automated translation (e.g. Google Translate).
Commonly used search algorithms include the Minimax algorithm and alpha-beta pruning, an optimized variant of the former. These algorithms let the computer traverse a search tree and decide which path to take in order to arrive at the optimal result as quickly as possible. Such algorithms are regularly used in computer games to decide how computer-controlled opponents should act.
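[Editor’s note: the minimax-with-pruning idea can be sketched in a few lines. The game tree below is a toy example, not taken from any real game; leaves hold scores from the maximizing player’s point of view.]

```python
# A sketch of minimax with alpha-beta pruning over a toy game tree.
# Internal nodes are lists of child subtrees; leaves are numeric scores.
# Alpha-beta pruning skips branches that cannot affect the final decision.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):      # leaf: return its score
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:           # prune: the minimizer avoids this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:           # prune: the maximizer avoids this branch
                break
        return value

# Textbook-style tree: a maximizing root over three minimizing nodes.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree, True))  # -> 3
```

The pruning changes nothing about the answer, only the work done: whole subtrees (e.g. the 4 and 6 above) are never evaluated, which is why the technique matters for game-playing opponents.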
I personally specialized in metaheuristics and evolutionary computation in my studies. Metaheuristics are a family of algorithms for solving combinatorial optimization problems that speed up computation while not guaranteeing that the (globally) optimal solution is found. This is useful for computationally hard problems, such as NP-hard problems, where finding the global optimum would take a very long time and a very good, though not necessarily optimal, solution is acceptable. Examples of metaheuristics include variable neighborhood search, simulated annealing, and tabu search. (Branch-and-bound, by contrast, is an exact method that does guarantee the optimal solution.) In general, metaheuristics have the disadvantage that they sometimes get stuck in local optima – solutions that are better than all of their “neighbors” but still far from the global optimum. To overcome this obstacle, metaheuristics have built-in mechanisms to move away from such neighborhoods rapidly and search for a better optimum elsewhere.
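[Editor’s note: simulated annealing, one of the metaheuristics named above, can be sketched briefly. The objective function and all parameters below are illustrative choices, not from Volko’s work: f(x) = x² + 10·sin(x) has a local optimum that a plain hill-climber could get stuck in.]

```python
import math
import random

# A minimal sketch of simulated annealing. Worse moves are sometimes
# accepted, with probability exp(-delta/temperature), and the temperature
# cools over time -- this is the built-in mechanism for escaping local optima.

def f(x):
    """Toy objective to minimize; has local and global minima."""
    return x * x + 10 * math.sin(x)

def simulated_annealing(x0, iters=20000, temp0=10.0, cooling=0.999):
    random.seed(42)                  # fixed seed so the run is repeatable
    x, temp = x0, temp0
    best_x, best_val = x, f(x)
    for _ in range(iters):
        candidate = x + random.uniform(-1.0, 1.0)   # random neighbor
        delta = f(candidate) - f(x)
        # Accept improvements always; accept worse moves with a probability
        # that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if f(x) < best_val:
            best_x, best_val = x, f(x)
        temp *= cooling              # geometric cooling schedule
    return best_x, best_val

best_x, best_val = simulated_annealing(2.0)
print(best_x, best_val)
```

Early on, the high temperature lets the search wander freely; as it cools, the search settles into a basin, trading the guarantee of global optimality for speed, exactly the trade-off described above.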
Evolutionary computation is a class of metaheuristics based on the idea of Darwinian selection: a range of algorithms inspired by biological mechanisms such as mutation and selection. One interesting subtype of evolutionary computation is genetic programming, in which the computer creates new programs itself and selects the ones that seem to work best.
All of this is supposed to make the computer behave in an “intelligent” manner. And researchers working in this field are becoming increasingly successful: some computer programs already achieve an average score on intelligence tests designed for human beings. And yet, the computer lacks one thing humans have at their disposal: self-awareness. Computers may be able to think, but they are not aware of doing so. That is why it is still ethical to turn off or throw away a computer, while of course it is not ethical to kill a human being.
Computational intelligence, just like human intelligence, relies heavily on logic, which is why lectures on formal logic, the history of logic, and non-classical logics make up a large part of the computational intelligence curriculum at university. A computer is excellent at computing logical conclusions from given premises, but it lacks the ability to come up with new ideas of its own; it can only draw conclusions from data that is given to it. Of course, it is debatable whether human beings are really different in this respect. Perhaps human beings, too, can only come up with new ideas by creatively combining knowledge and experiences that were previously acquired.
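[Editor’s note: “computing logical conclusions from given premises” can be made concrete with a small sketch of forward chaining over rules, repeatedly applying modus ponens until nothing new follows. The facts and rules below are invented for illustration.]

```python
# A sketch of drawing logical conclusions from premises: forward chaining.
# Each rule is (premises, conclusion); whenever all premises of a rule are
# known facts, its conclusion is added (modus ponens), until a fixed point.

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)   # premises hold, so conclude
                changed = True
    return facts

# Illustrative rule base.
rules = [
    (["rain"], "wet_ground"),
    (["wet_ground", "freezing"], "icy_ground"),
]

print(forward_chain(["rain", "freezing"], rules))
```

The computer mechanically derives everything that follows from the premises, but notice it cannot invent a new rule or a new fact on its own, which is precisely the limitation Volko points to.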
Rick Rosner: The general question for Claus and me is the nature of intelligence, and Claus has talked a lot about it because computational intelligence is his field. Claus, you talk about various forms of computational intelligence and AI. I think most people who don’t work in the field, like me, think of AI as robot butlers or a robot girlfriend – often a human-type brain in a human-type body, or at least something you can talk to. (We did this interview many months ago, and I’ve taken a shamefully long time to go over my comments. But in that time, I think the public has become much more aware of machine learning. We may not understand it, but more and more we know it’s not just robot girlfriends.)
But when people who work in AI and machine learning talk about that stuff, I don’t think you mean fully conscious human thinking. I think you mean various forms of very powerful computation, which may or may not include an ability to improve performance through self-feedback or machine learning. I have a friend who says that by the year 2100 there will be a trillion AIs in the world.
But that doesn’t mean a trillion robot butlers or girlfriends. He means a trillion machine intelligences of various types, with most of them engineered for specific functions and most without consciousness. Sophisticated computational devices will surround us. It’s been predicted that sidewalks will have chips in them to record pedestrian traffic to help city managers know how to deal with pavement durability and congestion issues, and who knows what else. But that doesn’t mean that the sidewalk will be conscious. It would be a sad life for a sidewalk chip that has to be conscious 24/7 of itself being a sidewalk.
A conscious sidewalk would be overkill. Though it wouldn’t be overkill to have sophisticated tallying technology in a sidewalk, especially in a future when such technology will be cheap.
When it comes to consciousness versus machine intelligence, I think what I believe about consciousness is closest to Minsky’s Society of Mind with massive feedback among the brain’s various subsystems. Today, machine learning and AI do not include the massive amount of shared information among expert subsystems that goes into having a fully fleshed consciousness. The option is not there yet. And even when it is, AI for most tasks will not require the massive and intricate information-sharing that constitutes consciousness. However, in the farther future, more than a century from now, information processing will be so powerful, ubiquitous, highly networked and flexible, that consciousness will not be considered as special as it is now. It could be something that is or is not present in parts of a system at a given time, depending on its immediate information-processing needs.
Volko: First, before answering Scott’s new questions, I would like to comment on Rick’s statement regarding consciousness.
I think Rick is right that artificial intelligence enables computers to make very complex computations, but that it does not make the machines conscious.
There has recently been an article about this matter in Singularity Hub (link). To quote from the article:
“Consciousness is ‘resolutely computational,’ the authors say, in that it results from specific types of information processing, made possible by the hardware of the brain. […] If consciousness results purely from the computations within our three-pound organ, then endowing machines with a similar quality is just a matter of translating biology to code. […] To Dehaene and colleagues, consciousness is a multilayered construct with two ‘dimensions:’ C1, the information readily in mind, and C2, the ability to obtain and monitor information about oneself. Both are essential to consciousness, but one can exist without the other. […] Would a machine endowed with C1 and C2 behave as if it were conscious? Very likely: a smartcar would ‘know’ that it’s seeing something, express confidence in it, report it to others, and find the best solutions for problems. If its self-monitoring mechanisms break down, it may also suffer ‘hallucinations’ or even experience visual illusions similar to humans.”
I personally tend to be highly skeptical about this statement. I doubt the basic assumption that “consciousness results purely from computations”.
It is not easy to explain what consciousness is. I can only speak for myself: I have a strong feeling that “I am something (or someone)”. I “hear” my own thoughts, and I have the feeling that I can control them, as well as my actions. I doubt that this can be achieved just by computation. In this context, it may be interesting that Drs. Vernon Neppe and Edward Close recently proposed a “theory of everything” which they called the “Triadic Dimensional Distinction Vortical Paradigm” (see also: link). They stated that reality has three dimensions of space, three dimensions of time, and three dimensions of consciousness – nine dimensions in total. I have, admittedly, not studied this theory in detail yet, having had other priorities in my life so far, but I consider the notion that there are three dimensions of consciousness, whatever that is supposed to be, highly interesting. A similar proposition has been made by physicist Dirk Meijer (“The mind may reside in another spatial dimension”, see link).
Also, the highly renowned theoretical physicist Edward Witten recently stated: “I tend to think that the workings of the conscious brain will be elucidated to a large extent. Biologists and perhaps physicists will understand much better how the brain works. But why something that we call consciousness goes with those workings, I think that will remain mysterious.” (Source: link)
Jacobsen: When I reflect on the nature of intelligence, the subject of this conversation, Claus, you focus on computational intelligence, as that is your field of specialization, which interests me. Rick, you wrote for broadcast television, specifically as a comedy writer for late-night television, for more than a decade. Your examples come from popular culture because of the cultural stew of Los Angeles, California, where you live, have worked, and continue to write independently with me. Of course, we discussed these examples in previous publications.
I note a few main points – and this may run into more and more questions. One is the division between more general and more specialized applications for human utility. An example of the former is the robot butler: something tasked with a broad set of purposes to serve human beings. An example of the latter is sensors in the sidewalk tied into some central processor underneath a city: something with a specific task and nothing more. According to Rick’s friend, there could be one trillion of these AIs, mostly of the specialized kind, by 2100. Nonetheless, both assume functional utility to people.
However, taking off from the late Marvin Minsky’s point about the society of mind, what about the butler? The robot butler could be upgraded with additional processing to have self-awareness beyond the rudimentary, even a rich personality and an internal dialogue – able to entertain guests in the home as it serves them dinner. Rick, how might this play out? How has this played out in popular culture or in science fiction portrayals?
Rosner: Bear with me – I’ll get to the robot butler. The same friend who says that we’ll have a trillion AIs also says that technology is driven by sex, meaning that the internet is as developed as it is today because, among other things, it is an efficient pornography delivery system. To put it a nicer way, our humanity, via market forces, will continue to drive technology, even as we become what has been called transhuman. Whatever we turn into, we will still want friends and companions. We will be deeply embedded in social/computational networks. For the past 10,000 years and more, we have been the planet’s apex thinkers. That is changing. The new apex thinkers will be alliances between humans and AIs. As we grow in information-processing power, we will have AI friends and work partners. Eventually, much of future humanity + AI will become subsumed in a planet-wide information-processing thought blob, out of which individual consciousnesses will bud off, go about some business or pleasure, and possibly be reabsorbed. It’ll be weird but not a dystopia – positive values will continue to be embodied in the inconceivable swirl.
Most science fiction misses the mark. Someone said something like, “Science fiction is the present dressed up in future clothes.” It’s hard to predict and present the full, crazy complexity of the future. Star Trek basically presents the people of today (well, the mid-1960s) having standard adventures but on other planets with people in body paint and on a starship with doors that go “whoosh.” Star Trek is not what 250 years from now will look like – it’s incompletely imagined, with an emphasis on what is acceptable to TV executives and exciting to viewers without breaking the production budget. There’s a new show on Netflix called Altered Carbon, set 300 years in the future. According to Altered Carbon, people of the 24th century will have smokin’ hot but largely unaugmented bodies (20 hours a week at the gym + diuretics) and will spend much time naked or in nice underwear, humping, shooting and torturing each other. And the streets are grubby and rainy and neon-filled, because Blade Runner. (At least Blade Runner 2049 doesn’t pretend to be the future – its creators think of it as a meditation on the future – a bleakly poetic futuristic fantasia.) The denizens of the real 24th century will be highly transformed, inside and out. They probably won’t be as interested in sex as we are – there will be so much else for them.
Science fiction (movies and TV) does what’s easy. That includes actors portraying robots and rainy, Blade Runnery streets. Few productions attempt complete futures. I think Her is good because it’s set 10 to 15 years in the future, so there hasn’t been enough time for much to change. I like some authors because their futures seem more weird or complete – Neal Stephenson, though he doesn’t always write about the near future. The Diamond Age might be Stephenson’s best version of a near future, but it’s already 23 years old. In 2007, Clooney was supposed to make it into a series for the Sci-Fi Channel, but it didn’t happen. Charles Stross is good, particularly Accelerando. Cory Doctorow is good. David Marusek – especially his short story, “The Wedding Album.” Margaret Atwood, Ramez Naam, Paolo Bacigalupi, William Gibson. Blood Music, by Greg Bear, but it’s 33 years old. Women are underrepresented on my list, so, some links. Of course, most of these authors haven’t attempted all-encompassing versions of the near future.
Scott Douglas Jacobsen