
Palo Alto Weekly

News - March 22, 2019

Stanford launches AI institute to focus on humanitarianism

Bill Gates, Gov. Gavin Newsom discuss the promise of the technology

by Sue Dremann

How can artificial intelligence help improve human life and solve intrinsic world problems such as infant mortality and flooding? How does society protect against human obsolescence because of AI and how does society prevent a technological backlash?


Staff Writer Sue Dremann can be emailed at sdremann@paweekly.com.

Comments

7 people like this
Posted by Here's an Idea
a resident of Crescent Park
on Mar 19, 2019 at 8:15 am

How about using AI to search through recent college applications to spot fraudulent test scores, bogus athletic participation, doctored photos, and things like that?

AI is great at this. It can find hidden patterns, such as cases of poor high school performance coupled with stellar SAT scores from applicants with high-income parents.

And Stanford is the ideal place for such research, since the university already has easy access to a huge collection of applications that apparently includes these exact types of problems.

I look forward to the new institute's published findings on this!


9 people like this
Posted by AnthroMan
a resident of Stanford
on Mar 19, 2019 at 9:29 am

The excessive use of 'metrics' (computerized statistical analysis) runs the risk of dehumanizing rather than further humanizing mankind.

A sense of morality & ethics cannot be programmed into computers...it cannot even be instilled in humans.

This is just another high-tech effort to circumvent & interfere with human life in general. It's OK for predicting hurricanes & the outcome of football games, but as a means of making key decisions regarding humanity, a computer should not even be a part of the picture.

Computerized fascism will not be a pretty sight & advocates of allowing even more AI to monitor & control our lives represent the new Fuhrers of the Millennium.


4 people like this
Posted by Anon
a resident of Another Palo Alto neighborhood
on Mar 19, 2019 at 9:33 am

Posted by AnthroMan, a resident of Stanford

>> Computerized fascism will not be a pretty sight
^^^^^^^^
-is not-

-- "Anon"


14 people like this
Posted by AJL
a resident of Another Palo Alto neighborhood
on Mar 19, 2019 at 10:10 am

I think this is a great step, but if they truly want a humanitarian and human-focused approach to AI, they also need what I call "temporal ergonomics" — understanding and improving how technology intersects with the functioning and autonomy of humans who have finite time.

This effort is still far too top-down — to get “a more human mission,” they must bring ordinary people who need artificial intelligence solutions in their daily lives into the problem-solving arena. Rather than just studying temporal ergonomics, or giving people a little money to cope with their technology (as in Singapore), or bringing in a few young women in hoodies, ordinary people need to be empowered to develop temporally ergonomic technology that works for individuals and continually makes them better.

Everyone should read MIT Professor Eric von Hippel’s book Democratizing Innovation (free on the internet). His group’s research found that people who innovate — do something new and unexpected that solves a problem — have certain characteristics. They experience a problem themselves, expect to benefit from solving it, and are willing to be the first to do so. Necessity is indeed the mother of invention. Big sports companies looking to innovate in bicycles didn’t create the mountain bike (now the face of the industry); it was enthusiasts who needed something the big companies, with all the focus groups in the world, would never develop.

Closer to home, today I am faced yet again with the mind-numbing task of slogging through medical paperwork, replete with tricks and “mistakes” generated by my multi-billion-dollar artificial-intelligence-enabled insurance company. I will, yet again, have to use my limited cognitive, temporal, and financial resources — instead of working on writing about an actual solution to a serious problem affecting a lot of people — to avoid being bankrupted by my healthcare and to try yet again to force yet another corporate behemoth to honor its contract. The technological-age version of your money or your life. When I am done, maybe later next week, after sacrificing resources I could have spent doing something productive, I will then slog through the new tax rules, including hours and hours more of mind-numbing paperwork, with all its attendant, complicated side tasks.

Having personal AI could spare me and my family so much. Having an artificially intelligent assistant who could competently scan and sort the paperwork and keep track throughout the year (without me being the robot assistant doing all the interface, technical support, secretarial support, and backstop tasks), and discuss the tasks and issues at hand, taking direction and even doing tasks for me while allowing me to manage from an executive level, and solving the various technological problems that crop up, would free up so much of my life for my family.

Something as simple as having an artificial intelligence assistant instantly review End User License Agreements and privacy policies when I need it, given my own values, and then suggest alternatives to accomplish what I am after, would not only help me; if millions of other people had such assistance in their lives, it would redirect the incentives of the industry in a more positive direction in a million ways (such as not trying to insert traps in EULAs, because no ordinary person has the time to read and evaluate all of them). It just seems like any technological need of today results in a cascade of nested technologically-related tasks of indeterminate (and uncontrollable) time drain. Temporally ergonomic artificial intelligence assistants could help level the playing field and allow ordinary people to be more effective with technology with fewer of the burdens we have come to expect.

“Autonomy” is key, because the promise of technology, even the “solutions” discussed above, too often becomes a burden for ordinary people. For too much of my life, in too many ways, I have become the “robot” who has to spend my time, money, and mental energies serving the technology. For example, when things are “upgraded”, too often the upgrade serves a purpose for the technology company and requires more time, energy, and money from me while either adding nothing to my functionality, disrupting my workflow and requiring new tasks of me when the former technology was working fine, or worse, taking away my functionality altogether.

Instead of the Six-Million-Dollar-Man model of technology — making me better than I was, stronger, faster, which artificial intelligence could do now — the technology keeps hitting the reset button on MY life. It is a situation crying out for an artificial intelligence solution to make ordinary people like me far more functional, allowing us as creative humans the ability and autonomy to solve the problems (using artificial intelligence where we might). There is a big difference between a company presuming to solve a problem for millions of people and empowering millions of people to solve the problems (often created by technology) in their own lives.

I went to an educational conference last year and in pretty much every session, regardless of what it was about, someone had a question about how they could solve yet another problem with how technology was seriously INTERFERING with the educational situation at hand. They need the technology to do what they are doing, yet the technology is practically booby-trapped for utter lack of temporal ergonomics.

Fei-Fei Li hit the nail on the head when she brought up the lack of diversity in technology development. The biggest problem with that, from a humanitarian problem-solving standpoint, is that the people developing the technology typically have no experience with the problems they are creating for everyone else. Young energetic males who have never experienced the burden of chronic health problems, never dealt with a confluence of crushing life circumstances like losing a home in a disaster while caretaking for an adult with Alzheimer’s, never had to sort through a crush of papers created by a hostile entity or unjust legal situation to save their business: such people have no appreciation for how damaging an “attention merchant” economic model — employing brain science to essentially addict people to their technological devices and steal their time and autonomy — is in the lives of real people.

Fei-Fei Li: a bunch of fellow parents and I have been talking for a long time about writing a letter to Carnegie Mellon professors to request exactly this: help developing personal AI assistants WITH US, so that instead of technologists creating yet another burden (or way to replace humans), technology does things to make us better, in ways that level the playing field and that take over tasks that currently only humans can do. Is there a place in this effort for us?


4 people like this
Posted by Annette
a resident of College Terrace
on Mar 19, 2019 at 11:27 am


We need to keep the human in humanity so it is reassuring that Stanford has created this institute. A recent WSJ article, "The Autocrat's New Tool Kit" focused on how AI can be used to build a dystopian world. Truly scary potential; this institute's work is much needed.


5 people like this
Posted by CrescentParkAnon.
a resident of Crescent Park
on Mar 19, 2019 at 11:29 am

Reading about the Mafia boss who was assassinated in NY the other day, I wonder, with all this AI and NSA surveillance we have ... why is there even such a thing as a Mafia boss anymore?

If all this technology is not taking care of these criminals and other systems of corruption, what is it good for? ... because sooner or later all this technology and power will be used by the criminals against us - if it is not already. We seem to have a "boss" of some sort as our leader today, and a lot of the people who support him act like thugs.


15 people like this
Posted by Guiseppe
a resident of Greenmeadow
on Mar 19, 2019 at 1:22 pm

Too much focus on artificial intelligence...not enough on expanding actual human intelligence.


10 people like this
Posted by AnthroMan
a resident of Stanford
on Mar 19, 2019 at 8:14 pm

> Having an artificially intelligent assistant who could competently scan and sort the paperwork and keep track throughout the year (without me being the robot assistant doing all the interface, technical support, secretarial support, and backstop tasks), and discuss the tasks and issues at hand, taking direction and even doing tasks for me while allowing me to manage from an executive level, and solving the various technological problems that crop up, would free up so much of my life for my family.

It's called cybernetics & the concept/science has been around since the beginning of the Industrial Revolution. It replaces people with machines/robots & in theory leads to higher production/efficiency/QA and...human unemployment, or the necessity of retraining individuals with new or different job skills.

AI does have potential. Japan is creating very human-like robots with programmed personalities to serve as surrogate companions & lovers. This in turn could alleviate personal loneliness and perhaps even reduce domestic violence, as destroying one's robot would be akin to tossing a chair through a television screen. No crime involved...just go out & buy a new robot mate.



7 people like this
Posted by AJL
a resident of Another Palo Alto neighborhood
on Mar 20, 2019 at 11:54 am

Ah, but AnthroMan, you have missed my point. My main point is not really about a specific tangible implementation of technology (e.g., cybernetics versus software), but about “temporally ergonomic” technology — technology focused always on making me more than I was in the whole context of my human finiteness. I want upgrades that always take me from where I am as a finite human being, already enabled by technology in the context of my existing life, that are designed to make me even better without requiring me in any way to backtrack or buy (and spend time patching together) new things to keep doing what I was already doing just fine.

What I am discussing is the opposite of replacing humans with technology — I am asking for technology that does what I do in my life as a human BUT THAT NO ONE ELSE CAN OR WILL (even if I could afford it), like making mincemeat out of my paperwork in the course and context of my life. I am then able to do more as a human because having that gives me back my time and autonomy. Human-focused technology should enable ME, without burdening my time, attention, finances, or life — it should be temporally ergonomic. It shouldn’t require me to be the tech support, secretary, repairperson, backstop for all interface tasks, amateur lawyer, just to use it.

The problem is that the entire thrust of the technological age has completely missed the point about temporal ergonomics. The focus has been, as you aptly pointed out, on replacing people, not on enabling them to be better than they were.

Let me give you a simple, current example: addiction to videogames.

There is a huge industry that makes great entertainment, with a dark side that I’m not even going to spend time describing, because it affects millions of people but doesn’t affect everyone equally, for many reasons. There are beneficial sides, too, which I think Jane McGonigal is a great evangelist for in her books, and which also don’t affect everyone equally.

Regardless, the beneficial side isn’t offered without the dark side. In some ways (but not all), they are inextricable: video games take advantage of our human brains much the way movies do to draw us through a story. There is a reason it is more difficult for (most) people to get up and leave a good movie at certain dramatic markers in the middle than when the dramatic arc has resolved at the end. Human stories hack our brains, just a little.

But video games don’t design in the same kind of resolution that a half-hour sitcom or a 90-minute movie do. They (and the platforms they function on) are designed to just keep us there. I’m always surprised that so many people don’t know this, but a good place to start in understanding why the industry is so NOT temporally ergonomic is Anderson Cooper’s 60 Minutes stories on brain hacking: Web Link

Parents dealing with what I call the “Dementor on the desk” (nods to Harry Potter), which seems to suck their children’s consciousness and attention away, can’t create their own family boundaries where their children get the benefits (as required from school) without also inviting in the dark sides. Because of the industry’s “brain hacking” and the “attention-merchant” economic model, as Tristan Harris says, it’s not fair to make it about the willpower of a child when there are a thousand people on the other side of that screen trying to keep them there. I do remember a time, before the graphical interface, when computers really weren’t addicting. The technology alone isn’t the problem.

The segment of our society most affected by this is families. Parents struggle with videogame addiction in a spectrum of ways, but what they don’t have is the choice to accept the good without the bad.

If I had the kind of artificially intelligent assistant in my home that I would wish for, I would be able to get help solving the problem — for example, to allow my children to play video games (even get the benefits of gaming that McGonigal describes) while asking the assistant to fix the addiction problem. If my children wanted to play a specific videogame for 90 minutes, I could ask the AI assistant to write code to draw them through an arc of resolution, the same way a movie ending does, so that my kids could transition seamlessly and leave the game after a predetermined amount of time, entertained, happy, and ready to move on to something else. If my AI assistant and I came up with something really good, maybe we could even create whole original videogames together that offered a satisfying experience from start to finish, over a pre-determined amount of time decided on by the user, and sell it.

Even without writing a whole new videogame experience, I could envision an AI assistant that could monitor a person or child using a technological device and create agreeable countermeasures when attention-merchant tactics make it difficult for the child to optimally use the technology for education (with a MINIMUM of screen time) and LEAVE. Such an AI assistant could ONLY be designed to be temporally ergonomic in the real context of lives like mine, not in isolated labs staffed by a whole industry of people who have never experienced such problems.

I always point out to people that Steve Jobs had an assistant (a person) whose only job was to make sure his technology worked the way he needed it to work, so he could USE it for what he wanted, when he wanted, and not have to fiddle with all the problems the rest of us do — all those nested tasks of indeterminate time drain (from dealing with malfunctioning routers to choosing whether to spend time reading a EULA for privacy concerns). For the rest of us, an AI assistant could achieve the same thing. It wouldn’t replace a human being, since the rest of us mortals could never hire an assistant like Jobs did.

My point is that such technology would empower ME as a human being. My point is that I can wait until the cows (never) come home for technologists to create those solutions for me. But they won’t: they don’t have an incentive to and haven’t for most of the technological age; they don’t even remotely understand the problem, or have a population in the industry even capable of understanding it.

I once called in to a radio show to put these points in front of an AI expert, asking that AI be used to make ME more effective in my life, and he answered by saying the technology already does that, completely and utterly missing the point (and missing that he was WRONG). I’m afraid this is a pretty consistent reaction from people in the industry. They just. don’t. get it. But not long after, I called in to speak with a doctor who wrote a book about where technology was not meeting its promise and sometimes impeding medicine, and I brought up temporal ergonomics and how simply creating technology to do a doctor’s given task, or to replace the doctor doing it, doesn’t necessarily help the doctor become a better doctor. It can even create complexity that impedes the doctor’s effectiveness (which is what her book was in part about). The technologist completely missed my point, whereas the doctor totally got it, immediately. And now AI technologists are looking for where they can most easily replace doctors, instead of figuring out how to make existing doctors far, far more effective through temporally ergonomic technology. It’s not nearly the same thing.

If someone truly has a mission to humanize AI, then it must be first and foremost to democratize and distribute it, in a way that allows individuals to solve problems in their own lives — to give humans full control of their time and efforts. Temporal ergonomics is essential. The very first problems we ordinary humans will solve are those we face in ordinary life that technology has created, and that AI in the hands of much bigger, more powerful, wealthier entities (like insurance companies and attention merchants) makes far worse.


8 people like this
Posted by The Best Of Both Worlds
a resident of Portola Valley
on Mar 20, 2019 at 12:41 pm

> Japan is creating very human-like robots with programmed personalities to serve as surrogate companions & lovers.

This is the key...a personal assistant to handle all of AJL's administrative priorities while providing comfort sans any sexual harassment allegations.

Just remove the batteries when things get out of hand.


4 people like this
Posted by CrescentParkAnon.
a resident of Crescent Park
on Mar 20, 2019 at 2:30 pm

> I always point out to people that Steve Jobs had an assistant (person) whose only job was to make sure his technology worked the way he needed it to work so he could USE it for what he wanted, when he wanted,

That seems to be an unequivocal argument that he was just motivated by selling a fantasy device that really did not work; that he never really used his own devices well enough to know how to set them up and fix them. Technology has been a boondoggle for so long that we barely even perceive it any more.

A comment was written about GMOs in Steven Druker's book "Altered Genes, Twisted Truth," where a GMO industry spokesperson stated that if Americans want to be first in technology, they need to accept being guinea pigs.

Routinely, software products are brought to market without testing, letting users complain about the things they do not like and fixing them on an as-needed basis. If you think that works, think about Boeing's latest crash, which first indications suggest was a software and training issue.


1 person likes this
Posted by Looking for Owls
a resident of East Palo Alto
on Mar 20, 2019 at 3:33 pm

Great! I thought to myself as the title of the article caught my eye.
Wisdom, not just narrow intelligence, to counteract what Isaac Asimov formulated: "The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom."

That excitement turned into a cry-or-laugh reaction. I'm a medical doctor. When we start thinking that a low dose of antibiotics is the solution to child mortality in developing countries (to correct their microflora), or that DNA studies on women in Africa who give birth prematurely will let us customize their diets (essentially) - and this coming from this group of Stanford interdisciplinary stars - maybe we should just let AI take over?

"There are some ideas so absurd that only an intellectual could believe them."
- George Orwell

This is absolutely outrageous. Resources are limited. We have to do a better job at choosing and formulating problems, and then solving them in an equally disciplined way, and yes, AI might very well play its part.
And I'm still waiting for wisdom to catch up with science (which I highly respect).




4 people like this
Posted by AJL
a resident of Another Palo Alto neighborhood
on Mar 20, 2019 at 3:34 pm

"That seems to be unequivocal argument that he was just motivated by selling a fantasy device that really did not work."

Or, it's an indication of just how little respect technologists have for the rest of us (and our time), when the company that did the most for "humanizing" computers never developed a sense that other people value their time as much as Jobs did his.

I believe the next "killer app" of technology will be democratizing AI to allow people to never be burdened by their technology in the way users/consumers have been in the last 30 years, but instead to be freed and allowed to become always better and more effective per their own goals.

The big revolutions in computers (aside from the usual obvious) are: going from no screen to a screen, and going from line input to a graphical interface. The next should/will be to free us from the constraints of rapidly obsoleted arcana that technology imposes on us in myriad ways. That is what Steve Jobs was buying his way out of with a human to do those tasks for him, so that he was free to just be effective with the technology.


Like this comment
Posted by Don
a resident of Another Palo Alto neighborhood
on Mar 20, 2019 at 7:55 pm

This long conversation reminds me of a more succinct notion, a split in what is now loosely called AI. A little more than 50 years ago, there emerged from John McCarthy and others the kind of AI that imitates human activity. It proceeded for some decades, but its approach was complex and ultimately not very successful. AI faded from view and underwent name changes like "machine intelligence". The other approach taken at the outset was called augmented intelligence, and its principal advocate was Douglas Engelbart. He simply wanted computing to enhance one's, or more favorably a group's, ability to tackle problems otherwise unsolvable. That has certainly come to pass in examples like the human genome. Imitating humans or augmenting them was the bifurcation a half century ago. The boundaries are now murky and AI has come roaring back with substantially new approaches. Just maybe the distinction is getting increasingly academic.


3 people like this
Posted by AJL
a resident of Another Palo Alto neighborhood
on Mar 20, 2019 at 10:37 pm

@Don,
Douglas Engelbart was ahead of his time and had so much heart. If only that were the face of technological development!


Like this comment
Posted by Deng Zhao
a resident of Charleston Meadows
on Mar 21, 2019 at 3:17 pm

> Japan is creating very human-like robots with programmed personalities to serve as surrogate companions & lovers.

Same in China but not as friends and lovers. As soldiers.

Raw materials from recycling and technology from US and Japan + our own.


