The Singularity Summit is an annual event put on by the Singularity Institute ("bring[ing] rational analysis and rational strategy to the challenges facing humanity as we develop cognitive technologies that will exceed the current upper bounds on human intelligence"). In a nutshell: it is rationally conceivable that in humanity's near future an artificial intelligence will be created that is smarter than us, and after that, all bets are off, hence the "singularity". You might say to yourself: just as there are surely scientists making sure we don't get hit by a giant asteroid or wiped out by some bioweapon, surely there are scientists or policy thinkers out there considering this possibility and preparing for it, just in case, to make sure we don't end up in some Terminator-esque dystopia. These are those scientists and thinkers.
I should note upfront that this sort of serious exercise in long-term thinking, with its bent toward fantastic technological possibilities, attracts a certain kind of sci-fi nut, and just like the naked people at Burning Man, they seem disproportionately represented in the image outsiders have of the group. So yes, there is some of that, at all levels of the community really, but in general there is good rational thought going on.
The following is my brief recap of the event. I didn't take notes the first day, so my recollections may be a bit sparse there. Apparently the videos will be online later; I'll update with links then. I tried to spell-check this but the system failed to work, so I apologize in advance.
Ray Kurzweil “From Eliza to Watson to Passing the Turing Test”
If the Singularity has a poster child, it is Kurzweil. He wrote the book on it and co-founded this summit. I've seen him talk several times (I was even on his diet for a while). His intro was fresh compared to his often-rehashed slides and points, but honestly I don't remember much of it. The basic message was that things are on track, with recent events holding to earlier projections. He still puts 2029 as the target date for the Singularity. The thing I overheard people recounting from his talk most often was the idea that as one advancing technology paradigm runs out, another takes its place. When vacuum tubes hit a wall imposed by physical law, the transistor was forced to appear; as flat transistors reach the end of their run, we'll have 3D circuits (already 30% of memory chips are 3D, I think; this is from memory, heh). It's a convenient concept for hand-waving purposes, but convenience doesn't mean it's wrong, so ok, I buy it.
Stephen Badylak: “Regenerative Medicine: Possibilities and Potential”
This was a remarkable presentation on recent advancements in regenerative medicine. He demonstrated how new human trials have successfully regrown parts of the body that medicine previously had no way of repairing. The technique consists of grafting an "extracellular matrix" (from a pig) onto the wound. The matrix attracts the patient's own endogenous stem cells to the wound site, which starts an amazing repair process. He showed the technique successfully used on a soldier with massive loss of thigh muscle from a roadside bomb, and on a man with throat cancer.
Peter Thiel: “Back to the Future”
Thiel is an impressive guy, part of the impressive PayPal Mafia. His talk had lots of little pithy quotes; some of them are at Gubatron.com. Here's one more (paraphrased): the less privacy we have as a society, the more tolerance we need. I'm willing to trade off privacy for social tolerance.
Sonia Arrison: “100 Plus: How the Coming Age of Longevity Will Change Everything, From Careers and Relationships to Family and Faith”
Arrison, recently on TechCrunch TV, is pitching her new book, in which she muses not so much on how we are going to increase life expectancy in the US from 80 to 150 years, but on what the implications are when we do. It seems to be a light treatment of the subject, but an interesting starting point. And since it's inherently speculative, how deep can it reasonably go? The Thiel Foundation was giving copies out for free, so I got one.
James McLurkin: “The Future of Robotics is Swarms: Why a Thousand Robots are Better Than One”
This was very cool. Autonomous swarm robots, what's not to like? Robots swarming earthquake rubble, oil spills and other planets ... the usual stuff. His main point seemed to be his team's research into how to do command and control of such artificial organisms. The idea is to have some of the bots take an internal scaffolding/skeletal role, others recognize that they form the edge, and the rest act as workers in between. He was also pimping his new affordable robot, which he wants to disseminate all over the world.
Michael Shermer: “Social Singularity: Transitioning from Civilization 1.0 to 2.0”
Shermer is an impressive thinker, founder of Skeptic magazine and a frequent talking head in debates on theological issues. He paid homage to one of my favorite books, Robert Wright's Nonzero, and basically takes his thesis (as I remember it, lacking notes) from there: that civilization will continue its path toward becoming one big love fest, because the arc of the moral universe bends toward justice (paraphrasing Theodore Parker).
If you get time, you might want to watch this 80 minute talk Shermer gave in NY recently on the subject of "The Believing Brain", his recent book.
Jason Silva: “'The Undivided Mind' — Science and Imagination”
Silva thinks that those of us concerned with the singularity should adopt sexier, more populist modes of communication in order to instigate public awareness and debate on the subject. Either that or he wants us to tune in, turn on and drop acid. Look, I'm all for psychedelic-induced thought experiments as a way to broaden the mind of the individual and, by safe proxy, humanity. Hey, it worked for Francis Crick when he came up with the breakthrough idea behind perhaps the most important discovery of the 20th century. But the message on the benefits of mind expansion is better delivered by John Perry Barlow than by this guy, who comes off as a tripped-out, name-dropping, trust-fund, jet-setting raver kid. I agree with his premise to an extent, but he takes it way too far. Frankly he reminds me of friends of mine, and I'm sure I'd enjoy hanging out with him. But I wouldn't want any of those friends being the creative director behind Singularity awareness either. Check this out:
Less of that, please. It's like what David Brin talks about later. If you're trying to get a message through to people who are skeptical, you need to be very careful in how you explain your positions. You need to speak their language, not yours. This sort of message is just going to scare away the people we most need to convince.
This is a problem that goes all the way to the "top", for that matter. The recent documentary-slash-narrative film on the singularity, starring Ray Kurzweil's beautiful female virtual avatar Ramona in the narrative role, was also not the right way to convince people to take this stuff seriously. It was an interesting thought exercise for those already predisposed to this stuff, but it's not the way to recruit rational but unconvinced people. Oh, huh. Maybe that's what I'm not getting. Maybe they don't care about recruiting rational thinkers at this point?
Incidentally it's the same problem with the Occupy Wall Street movement. As long as the people representing the movement are the type most of America consider to be slackers, the movement is constrained.
Stephen Wolfram: “Computation and the Future of Mankind”
I am terribly embarrassed to admit this, but it was right after lunch and I hadn't been getting enough sleep lately: I slept through half of this. The latter portion I was awake for was quite good, and I'm looking forward to the video being posted so I can catch up. He was going over his New Kind of Science material and his WolframAlpha work. It was interesting, but being groggy and not taking notes, I'm hard pressed to relate it now. What I grokked for the first time was his impetus for starting down the path of a New Kind of Science: if it seems possible that the entire universe can be reduced to a simple computation, then we ought to have a few scientists looking for that computation. Just like we have a few looking for the big asteroids or the origins of the universe. It's not a practical thing in a day-to-day sense, but heck, someone should be looking for it. He started looking.
Dmitry Itskov: “Project 'Immortality 2045' -- Russian Experience"
Between my sleepiness, his accent, the bizarre material and him losing his notes for a couple of minutes, I got up and went looking for caffeine. It had something to do with sexbots, I think. Check it out here (actual URL): http://2045.com/
Christof Koch: “The Neurobiology and Mathematics of Consciousness”
But I made it back in time for a wonderfully nerdy talk on consciousness. Koch and colleagues have worked out an equation that can (possibly; it's still being tested) determine whether or not someone is conscious. It is founded on two observations:
- Consciousness is a highly differentiated state
- Consciousness is highly integrated
Using this basis, they developed a technique to calculate a value phi, Φ, that quantitatively measures consciousness. You can read all the details at Scientific American or download this pdf.
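To get a feel for the two observations above, here's a toy sketch. This is emphatically not the actual Φ calculation (which minimizes over all partitions of the system and is expensive to compute); it just illustrates the flavor, using entropy as a stand-in for differentiation and mutual information as a stand-in for integration. The distribution is made up for illustration.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Joint distribution over two binary "brain regions" X and Y.
# The correlation between them is what "integration" will pick up.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distributions of each region considered alone
px = [sum(p for (x, _), p in joint.items() if x == v) for v in (0, 1)]
py = [sum(p for (_, y), p in joint.items() if y == v) for v in (0, 1)]

h_joint = entropy(joint.values())    # "differentiation" of the whole
h_parts = entropy(px) + entropy(py)  # the parts treated as independent

# Mutual information: how much the whole exceeds its parts.
# A crude stand-in for integration -- real Phi minimizes over partitions.
mi = h_parts - h_joint
print(f"H(X,Y) = {h_joint:.3f} bits, I(X;Y) = {mi:.3f} bits")
```

A system of fully independent parts would give `mi = 0` (no integration); a fully redundant one would be integrated but undifferentiated. Consciousness, on this theory, needs both to be high at once.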
Eliezer Yudkowsky: “Open Problems in Friendly Artificial Intelligence”
Oh this was soooo awesome. I only grasped probably 1/3 of it, if that much. But I think it was all accessible with a little more time to process it. I'm probably going to rewatch this video several times.
Yudkowsky explained the logical problems with depending on a self-modifying artificial intelligence to check itself so that its modifications never violate the rules it previously held. In other words: how to keep friendly AIs friendly. There was a lot of logical notation, with Gödel and Bayes mixed in. I'm not going to try to recap this because I didn't grok enough of it, but I found a paper he wrote on friendly AI.
I'm pretty sure I saw him speak at the Singularity Summit 2007 as well, here's a video from that talk:
Max Tegmark: “The Future of Life: a Cosmic Perspective”
This was a nice way to end the day. The bulk of the talk centers on his argument that we ARE alone in the universe, or we had better hope we are. Using what looked like an awesome application called Deep Space Explorer, he put our place in the universe into perspective. Check it out below. It's a long video, but in the talk he spent about 20 seconds using the app to zoom out from Earth to the solar system, to the local galactic neighborhood, to the whole galaxy, and so on. All 3D and rotatable. The point is, we might as well be a pebble in the ocean.
He then went on to explain the problem with the Drake equation, which famously estimates the number of communicating civilizations in the galaxy. He believes the issue is with the terms representing the fractions of planets that can support life, do support life, and eventually support intelligent life. Those probabilities could be incredibly low. There could be some step in the process of becoming an extra-planetary intelligence that is very difficult to complete. And we had better hope that the difficult step comes before the stage we have already reached (so we have passed it) rather than after it. Otherwise we still have a big hurdle to clear.
You can read about this in his own words here.
Alexander Wissner-Gross "Planetary Scale Intelligence"
How could a globe-spanning AI come about? Well, who is most incentivized to create it?
- Quantitative Finance, which has the goal of modeling human group behavior in the markets in order to efficiently allocate capital.
- Quantitative Advertising, which is concerned with modeling the human mind to engineer better ways to sell to us.
He believes Quantitative Finance is the more advanced and the more economically coupled to humans, so it is most likely what will drive the emergence of such an AI.
Right now stock exchanges are being rebuilt around low latency, causing incredible new networks to be built. Finance is driving us to the limits set by special relativity for passing information around the planet.
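That relativity limit is easy to compute yourself. A rough sketch for the classic New York-London trading pair (my coordinates are approximate, and real routes are longer: fiber doesn't follow the great circle, and light in glass travels roughly a third slower than in vacuum, so this is a hard floor, not an achievable number):

```python
import math

C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
EARTH_RADIUS_KM = 6371

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

# New York <-> London
dist = great_circle_km(40.71, -74.01, 51.51, -0.13)
one_way_ms = dist / C_VACUUM_KM_S * 1000

print(f"{dist:.0f} km; physics floor: {one_way_ms:.1f} ms one-way")
```

That's on the order of 19 ms one-way no matter how good your network gets, which is why placement on the globe, not just hardware, becomes the optimization variable.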
As an aside, see also this article about the "seismic terrestrial effects of the math we're making"
He believes the logical physical placement of the distributed AI nodes can be determined by plotting the midpoints between the world's stock markets. He even showed a map of this; most nodes would sit in the middle of the oceans. Coordination will drive the AI.
The red dots are the stock markets, the blue points the midpoints:
He closed with why he believes Quantitative Finance is a blueprint for managing the singularity, listing how existing mechanisms map to the ones humans will need to control globe-spanning superintelligent AIs.
- pre-trade algorithm testing -> source and binary audits
- Dark pools -> Vinge's "zones of thought"
- "Large trader" rule -> detailed registry of AIs with government, including human org charts
- Market circuit breakers -> Centralized ability to cut off AIs from outside world.
- Swap data repos (black box recording) -> Centralized AI activity recording
- Short term cap gains tax -> Tax or throttle AI bandwidth to outside physical and digital world
I emailed him these two questions, but no answer yet:
- The chart at the end nicely shows how existing systems can lead to appropriate AGI mechanisms. Are there any necessary AGI regulatory mechanisms that you don't see coming from existing Quantitative Finance systems?
- I don't know much about Quantitative Finance, but as it seems to take over more and more of the trading volume, won't the impetus change at some point, perhaps pre-AGI, from modeling human behavior to modeling the behavior of other quant algorithms?
Sharon Bertsch McGrayne: “A History of Bayes' Theorem”
As a bit of a math history nerd, I found this interesting for 15 minutes, but she started to lose me pretty quickly. It went off the rails at the end, when she was unable to articulate what Bayes' theorem actually is. A nice lady, I'm sure, but a poor choice of presenter for this summit; half the audience understood more about Bayes' theorem than she did. She should have been upfront about not really understanding it but having researched some interesting anecdotes related to its history and usage. Then the Q&A session would not have been so awfully painful.
David Brin: “So you want to make gods. Now why would that bother anybody?”
In this light, humorous presentation, Brin proposed to teach Singularity thinkers how to talk to religious skeptics. He pointed out that the Great Silence (no ET communication) may be because "the grouches always win"; in other words, the science-haters stop progress. Rational thought is under attack, so we need to "consider judo": speak their language, use the Bible to draw them toward the light.
I didn't jot down or retain too many of these, although I agree with his central thesis. Here are a couple:
"Naming things" in Genesis is the only part of the Bible that describes what God intended humans to do before they screwed up and were cast out of Eden. It is the only pure moment in the Bible, evidence of what we were meant for. God wanted us to name things, and what is naming things but science?
The "cut them off at the knees" argument: the story of Jonah shows that God can change his mind. This probably won't win any arguments, but it is definitely a left hook they won't expect you to know how to throw.
Tyler Cowen: “The Great Stagnation”
This talk is based on his new book The Great Stagnation.
Cowen was very articulate and exhibited a rational thought process that I find refreshing. Here are my loose notes. He says we are approaching a time where over-specialization means regular people can't understand modern science. Because of financial incentives, a lot more human talent is going into ripping each other off than into advancing humanity as a whole.
We couldn't build today's energy infrastructure from scratch with today's regulations.
He had a depressing slide on "Total Factor Productivity", which measures the portion of economic growth attributable to innovation rather than added inputs. It has totally leveled off over the past 30-40 years. We grow GDP in lots of tricky ways, but actual growth due to innovation has plateaued.
Science is losing its ability to attract popular opinion. It has ceased to tell a compelling story of the future.
In 2030 the US population demographics will resemble the current population of Florida.
The oil shock of the 70s triggered the Stagnation; like the argument that precipitates a romantic break-up, it wasn't the real reason for the collapse, but it brought all the real problems to bear.
The primary failing of financial innovation is the inability to monitor and gauge risk (as opposed to monetary policy like going off the gold standard).
You can hear him talk about it all in this 18 minute TED talk:
Tyler Cowen & Michael Vassar Debate The Great Stagnation
Vassar came out looking like a Monty Python send-up of an intellectual: sort of hilarious in his overdone sombre demeanor and attire, followed by an incomprehensible joke that fell flat. He seemed immediately outclassed. The debate, frankly, was better between Cowen and the audience. The only part I tuned into was when the possibility of an AI that can help you date came up, since that's pretty much what I'm building right now.
John Mauldin: “The Endgame Meets The Millennium Wave — Why the Economic Crisis will be History as We Create the Future”
Cringe! OMG. A creepy infomercial guy has invaded the stage! WTF. Who let this guy in? Oh, he's in some adoption cult. It must have been some inter-cult loan system, like Link+ for crackpots. My bullshit active-defense filter sprang up too quickly for me to hear any of this talk. Honestly, I might even agree with whatever he was selling, but his delivery was just as bad as Jason Silva's, in a different and even less palatable way. I spent most of the talk watching the two camera operators at the far ends of the stage use hand signals to coordinate their efforts.
I found a video of him giving what is likely the same talk. Clicking through it quickly, his delivery seems not nearly as "snake oil salesman / evangelical preacher" in this one, so maybe I'll make time to get through it:
Riley Crane: “Rethinking Communication”
How can we use new communication tools to engage people in new ways and further a cause, like the cause of science? He takes his physics research on how large groups of electrons organize and applies it to understanding how human systems organize. He found a lot of regularity in how social media spreads through society; I believe he related it to a Poisson average. He showed how people have observed a statistical fingerprint describing procrastination, in a study of how long it took Einstein and Darwin to respond to letters. People have behavior patterns.
To get large groups of people to do things, you are fighting the economics of attention.
The wisdom of crowds is great, but not all problems can be solved by aggregation. Some problems require coordination or collaboration.
For example, in winning the DARPA Balloon Challenge, his team concocted a number of smart virality tricks to assemble a vast network, but there was some sabotage. Data cleaning helped some, but additional tricks were required (which he didn't have time to go into). The primary lesson seemed to be: incentives drive participation. Don't tell the rabid viral marketers; they don't need any more incentive to annoy us.
He talked about a third type of tie, beyond strong ties and weak ties, that needs to be understood: temporary ties, based on temporary contexts.
Ultimately, shaping behavior is about Attention, Incentives, Communication.
The story I took away from this one was about a woman who used online maps and Twitter to quickly find the GPS locations for seven rescue operations in the aftermath of the Haiti earthquake. Her efforts (and those of her collected network) allowed her to help save lives in Haiti from her office in Cambridge.
Here is a link to more from Riley Crane: Reality Mining & Red Balloons
Dileep George and Scott Brown: “From Planes to Brains: Building AI the Wright Way”
This was a tag-team talk from the folks at Vicarious Systems, a team that grew out of Numenta, the company founded by Jeff Hawkins around the theories outlined in his book On Intelligence.
The mammalian brain can do some amazing searches of possibility space in a short span of time. This indicates the neocortex has a lot of assumptions built in via evolution. In working on AI, instead of asking "what are the algorithms?" we should ask "what are the assumptions?" We can look to the neocortex for hierarchical structure that matches the hierarchies of the physical world, which could indicate efficiency and re-use in data processing. Just as an airplane need not flap its wings to fly, it's not necessary to mimic the brain to be intelligent. So they use non-biologically-inspired logic in their AI system. They were able (I think) to inline some non-biological logic and map it back to a biological circuit. He had some reasoning for why they had to start with a vision system for their AI, but I didn't really get it. Something about connection to a perception-action system.
Jaan Tallinn: “Balancing the Trichotomy: Individual vs. Society vs. Universe”
Used a prezi! Sweet!
He started the talk with a story about Stanislav Petrov, the guy who literally saved the world in 1983 by breaking protocol and declining to report a false alarm as an incoming attack, averting a Soviet nuclear counter-strike.
For a long time, society didn't change much and the recipes for dealing with challenges were firmly embedded in culture. Our environment is changing so fast that society is no longer as equipped to deal with challenges as are individuals. There was something else here about "Future Society" that I missed.
Evolution has played a trick on us that keeps us from doing long-term thinking. The trick is the social status reinforcement cycle, a reward system we are addicted to. It causes us to focus on short-term results, to be insensitive to scale, and to do things that are easy to understand. If we attempt otherwise, we likely won't get the social status reward we crave.
His desire to pursue solutions to this problem (and the financial freedom from selling Skype) led him to the existential domain, as in the existence of our species, and from there to wanting to help determine how to make an AGI "do what we want." To that end he has donated money, involved himself in the research, and used his "street cred" to evangelize the work.
He proposed that long-term thinkers brand themselves as the CL3 Generation, where Level 3 denotes thinking not about the self or the immediate society, but about the future society. One interesting suggestion was a fund, which he would like to call the Petrov Fund, that pays out in hindsight to those who have contributed meaningful effort to solving long-term problems.
By the way, there is already a Petrov Fund, run by a woman in San Francisco. It's a 501(c)(3) non-profit that accepts donations on behalf of the real Petrov, who was fired from his job and is living near the poverty level in Russia.
David Ferrucci: “Watson AI Perceptions”
I'm not even going to try to summarize this wave of information on how they built IBM's DeepQA-powered Watson supercomputer, which beat the pants off the top human challengers. The key point was that Watson generates a set of candidate answers with confidence values and only buzzes in when the top confidence clears a minimum threshold. It was awesome, thick with operational details. Looking forward to re-watching this one as well.
Here, watch the results of a practice round, note in the last half you can see the answer sets on the monitor:
Dan Cerutti: “Commercializing Watson”
How to determine what to do with Watson now? Start with what Watson can do:
- It understands human language. That's a big deal.
- It can read nearly limitless content and never forget it
- It returns answers with quantitative confidence values
- Given training, it learns
and four orthogonal issues:
- It takes a long time for the tech to mature, so it's a serious choice to decide where to start
- Need to find high value problems
- Need to work on problems where solutions generalize or scale
- Needs to be something that matters.
Suggestions:
- Finance companies want to know what stocks to buy. Dan quipped, "I'm not sure I'd sell that technology."
- Applications in legal analysis.
Defense applications and education made sense to the IBM team, but they realized Watson is best suited to important, critical decisions that are made by human beings, many times per day, quickly, where there is a big gap between the information available and how quickly a human can digest it. So they decided to focus on health care.
Hell yeah. Go Big Blue!
Ken Jennings: “The Human Brain in Jeopardy: Computers That 'Think'”
A crowd-pleasing closer where Jennings, the biggest winner in the history of Jeopardy, talks about his experience losing to a computer. Full of jokes about how Watson doesn't have to pee, and how it was like "an away game for humanity."
"This is what it looks like when the machines come for you"
My wrap up
All in all, a very good conference. There were two nude-women slides and a cleavage shot during the presentations, and no male sexuality exploited, so it scored mediocre on the sexism scale. It did better than mediocre on the no-crackpot scale, with maybe only one or two violators.
Next year it will be in San Francisco again so I will most likely return.
I do wish they had more structured opportunities to meet other attendees. One nighttime activity would be nice, or some sort of official back channel (other than Twitter), like tossing up an IRC channel or setting up something on a mobile group app. Even an attendee wiki or forum.