Today Google released an amazing white paper titled Concrete Problems in AI Safety. If you have the time to read it, I highly recommend it.
For all those who like to read the highlights, let me do that for you here :).
The Google Research team published what they consider to be the five problems that will become very important as AI is applied in more general circumstances. The team calls them the “forward thinking, long-term research questions” that are minor today but will become more important to address in future systems. (AKA Skynet or the Matrix or Abstergo.)
Here they are:
- Avoiding Negative Side Effects: How can we ensure that an AI pursuing its task doesn’t disturb everything else around it? Think of a vacuuming AI that knocks over a vase because going faster completes its goal sooner.
- Avoiding Reward Hacking: How do we enable a cleaning robot to clean and not just cover up our messes?
- Scalable Oversight: An AI system that needs human feedback as it performs a task should use that feedback efficiently, not keep bothering humans like the old Microsoft paper clip in Word.
- Safe Exploration: A mopping robot could be free to try different mopping techniques, but we wouldn’t want it to stick a wet mop into a live electrical socket.
- Robustness to Distribution Shift: A robot that learns heuristics for a factory floor may not be safe enough for an office.
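To make the first problem concrete, here is a toy sketch of one idea the paper discusses for avoiding negative side effects: penalize the agent for changes it causes to the environment beyond its task. Everything below (function name, state dictionaries, reward numbers, penalty weight) is invented for illustration, not code from the paper.

```python
def impact_penalized_reward(task_reward, side_state_before, side_state_after,
                            penalty_weight=5.0):
    """Toy 'impact penalty': subtract a cost for every non-task object
    the agent disturbed while doing its job."""
    disturbed = sum(
        1 for obj, state in side_state_before.items()
        if side_state_after.get(obj) != state
    )
    return task_reward - penalty_weight * disturbed

# The vacuuming example: the careless strategy finishes faster (higher raw
# reward) but knocks over the vase, so the penalty makes it worse overall.
before = {"vase": "upright", "curtains": "intact"}
careful = impact_penalized_reward(10, before,
                                  {"vase": "upright", "curtains": "intact"})  # 10.0
careless = impact_penalized_reward(12, before,
                                   {"vase": "broken", "curtains": "intact"})  # 12 - 5 = 7.0
```

With the penalty in place, the slower-but-careful behavior scores higher, which is the whole point: the reward signal itself discourages collateral damage instead of relying on the designer to enumerate every fragile object.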
Although these are great things to think about, there are more issues that need to be discussed and covered; the white paper itself goes into far more depth, so I’d suggest reading it if you’re interested.
In Google’s own words:

> We believe in rigorous, open, cross-institution work on how to build machine learning systems that work as intended. We’re eager to continue our collaborations with other research groups to make positive progress on AI.
We may be a little while off from having the robots we see in most sci-fi films, but I believe Google Research is on the right path today!