Keeping up with an industry as fast-paced as artificial intelligence is a tall order. So, until an AI can do it for you, here’s a helpful roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own.
This week in AI, OpenAI signed its first higher education customer: Arizona State University.
ASU will partner with OpenAI to bring ChatGPT, OpenAI’s AI-powered chatbot, to the university’s researchers, staff and faculty — holding an open challenge in February to invite faculty and staff to submit ideas for ways to use ChatGPT.
The OpenAI-ASU agreement illustrates the changing views on AI in education, as the technology advances faster than curricula can keep up. Last summer, schools and colleges rushed to ban ChatGPT over plagiarism and misinformation fears. Since then, some have reversed their bans, while others have begun hosting workshops on GenAI tools and their potential for learning.
The debate over the role of GenAI in education is not likely to be settled anytime soon. But—for what it’s worth—I’m increasingly in the supporter camp.
Yes, GenAI is a poor summarizer. It can be biased and toxic. It makes things up. But it can also be used for good.
Consider how a tool like ChatGPT can help students who struggle with homework. It could explain a math problem step by step or generate an essay outline. Or it could surface the answer to a question that would take far longer to find on Google.
Now, there are legitimate concerns about cheating – or at least what counts as cheating within the confines of today’s curricula. I’ve heard anecdotes about students, college students especially, using ChatGPT to churn out long-form writing assignments and answers to essay questions on take-home tests.
This is not a new problem – paid essay writing services have been around for a long time. But ChatGPT dramatically lowers the barrier to entry, some educators argue.
There is evidence to suggest that these fears are exaggerated. But setting that aside for a moment, I say let’s step back and consider what prompts students to cheat in the first place. Students are often rewarded for grades, not effort or understanding. The incentive structure is skewed. So is it any wonder that kids see schoolwork as boxes to check rather than opportunities to learn?
So let students have GenAI — and let educators test ways to leverage this new technology to reach students where they are. I don’t hold out much hope for drastic educational reform. But maybe GenAI can serve as a starting point for lesson plans that get kids excited about topics they’d never have explored before.
Here are some other notable AI stories from the past few days:
Microsoft Reading Coach: Microsoft this week made Reading Coach, its AI tool that provides students with personalized reading practice, available at no cost to anyone with a Microsoft account.
Algorithmic transparency in music: EU regulators are calling for laws to force greater algorithmic transparency from music streaming platforms. They also want to tackle AI-generated music — and deepfakes.
NASA’s robots: NASA recently demonstrated a self-assembling robotic structure that, Devin writes, might just become a critical part of off-planet travel.
Samsung Galaxy, now with artificial intelligence: At Samsung’s Galaxy S24 launch event, the company showcased the various ways AI could improve the smartphone experience, including through live translation for calls, suggested responses and actions, and a new way to search Google using gestures.
DeepMind’s geometry solver: DeepMind, Google’s AI R&D lab, this week unveiled AlphaGeometry, an artificial intelligence system that the lab claims can solve as many geometry problems as the average International Mathematical Olympiad gold medalist.
OpenAI and crowdsourcing: In other OpenAI news, the startup is forming a new group, Collective Alignment, to implement ideas from the public on how to ensure its future AI models are “aligned with the values of humanity.” At the same time, it is changing its policy to allow military applications of its technology. (Talk about mixed messages.)
A Pro plan for Copilot: Microsoft has launched a consumer-focused paid plan for Copilot, the umbrella brand for its portfolio of AI-powered content-generation technologies, and relaxed the eligibility requirements for its enterprise-level Copilot offerings. It has also rolled out new features for free users, including a Copilot smartphone app.
Deceptive models: Most humans learn the skill of deceiving other humans. So can AI models learn the same? Yes, it seems the answer is — and terrifyingly, they’re exceptionally good at it, according to a new study from AI startup Anthropic.
Tesla’s staged robotics demo: Tesla’s Optimus humanoid robot is doing more things — this time folding a t-shirt on a table in a development facility. But as it turns out, the robot is anything but autonomous at this point.
More machine learning
One of the things holding back broader applications of things like AI-powered satellite analysis is the need to train models to recognize what may be a fairly esoteric shape or concept. Identifying the outline of a building: easy. Identifying debris fields after flooding: not so easy! Swiss researchers at EPFL hope to make this easier with a program they call METEOR.
“The problem in environmental science is that it is often impossible to assemble a large enough dataset to train AI programs for our research needs,” said Marc Rußwurm, one of the project leads. Their new training approach allows a recognition algorithm to be trained for a new task with just four or five representative images, and the results are comparable to models trained on far more data. Their plan is to graduate the system from the lab to a product with a UI that ordinary people (i.e., non-AI researchers) can use. You can read the paper they published here.
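For intuition only, here’s a minimal sketch of the general few-shot idea described above, not METEOR’s actual method: embed a handful of labeled examples with a pretrained encoder, average them into per-class prototypes, and label new imagery by nearest prototype. The encoder and the tiny “debris”/“intact” tiles are placeholders made up for illustration.

```python
# Toy few-shot classification sketch (NOT the METEOR code): a few labeled
# "support" images per class are embedded, averaged into prototypes, and new
# images are assigned to the nearest prototype.
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder for a pretrained feature extractor; here we just flatten
    # and L2-normalize so the example runs end to end.
    v = image.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def build_prototypes(support_images, support_labels):
    # Average the embeddings of the four or five examples given per class.
    protos = {}
    for label in set(support_labels):
        vecs = [embed(img) for img, l in zip(support_images, support_labels) if l == label]
        protos[label] = np.mean(vecs, axis=0)
    return protos

def classify(image, protos):
    # Assign the class whose prototype is closest in embedding space.
    v = embed(image)
    return min(protos, key=lambda label: np.linalg.norm(v - protos[label]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Five fake "debris" tiles and five fake "intact" tiles as support data.
    support = [rng.random((8, 8)) for _ in range(10)]
    labels = ["debris"] * 5 + ["intact"] * 5
    protos = build_prototypes(support, labels)
    print(classify(rng.random((8, 8)), protos))
```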
Going in the other direction, image generation is an area of intense research, since doing it efficiently could cut the computational burden of generative AI platforms. The most common method is called diffusion, which gradually refines a source of pure noise into a target image. Los Alamos National Lab has a new approach it calls Blackout Diffusion, which instead starts from a pure black image.
This eliminates the need for noise in the first place, but the real advance is in a framework that operates in “discrete spaces” rather than continuous ones, greatly reducing the computational load. They say it performs well and at lower cost, but it’s certainly far from mainstream. I’m not in a position to evaluate the effectiveness of this approach (the math is far beyond me), but national labs don’t tend to hype this kind of thing for no reason. I’ll ask the researchers for more information.
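To make the diffusion description above a bit more concrete, here is a toy sampling loop, a sketch under heavy assumptions: the “denoiser” is a stand-in function rather than a trained network, and it only illustrates the general iterative-refinement idea, not the discrete-state math of Blackout Diffusion.

```python
# Toy sketch of a diffusion-style sampling loop: start from pure noise (or,
# in the Blackout Diffusion spirit, an all-black array) and repeatedly nudge
# the sample toward the model's estimate of the clean image.
import numpy as np

def denoise_step(x, step, total_steps, predict_clean):
    # Blend the current sample toward the predicted clean image; real
    # samplers use learned noise predictions and a noise schedule instead.
    alpha = (step + 1) / total_steps
    return (1 - alpha) * x + alpha * predict_clean(x)

def sample(shape, total_steps, predict_clean, start="noise"):
    # start="noise" mimics ordinary diffusion; start="black" mimics the
    # all-black starting point described above.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(shape) if start == "noise" else np.zeros(shape)
    for step in range(total_steps):
        x = denoise_step(x, step, total_steps, predict_clean)
    return x

if __name__ == "__main__":
    target = np.full((4, 4), 0.5)      # pretend "clean image"
    fake_model = lambda x: target      # placeholder for a trained denoiser
    out = sample((4, 4), total_steps=10, predict_clean=fake_model, start="black")
    print(np.round(out, 3))
```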
Artificial intelligence models are sprouting up across the natural sciences, where their ability to sift signal from noise generates new insights and saves students hours of data entry.
Australia is deploying Pano AI’s fire detection technology in the ‘Green Triangle’, an important forestry region. I love seeing startups put to work like this — not only could it help prevent wildfires, but it produces valuable data for forestry and natural resource authorities. Every minute counts with wildfires (or bushfires, as they call them down there), so early warnings could be the difference between tens and thousands of acres of damage.
Los Alamos gets a second mention (I just realized as I go through my notes), as they are also working on a new AI model for estimating permafrost thaw. Existing models for this have low resolution, predicting permafrost levels in chunks of roughly a third of a square mile. That is certainly useful, but with more detail you get fewer misleading results for areas that may look like 100% permafrost at the larger scale but are clearly less than that when you look closer. As climate change progresses, these measurements need to be accurate!
Biologists are finding interesting ways to test and use AI or AI-adjacent models across the field’s many subdisciplines. At a recent conference written up by my friends at GeekWire, tools for monitoring zebrafish, insects, and even individual cells were presented in poster sessions.
Over on the physics and chemistry side, Argonne NL researchers are looking at how best to package hydrogen for use as fuel. Free hydrogen is notoriously difficult to contain and control, so binding it to a special helper molecule keeps it tame. The problem is that hydrogen bonds with nearly everything, so there are billions upon billions of possibilities for helper molecules. But sorting through massive datasets is a machine learning specialty.
“We were looking for organic liquid molecules that hold on to hydrogen for a long time, but not so strongly that they can’t be easily removed on demand,” said the project’s Hassan Harb. Their system sorted through 160 billion molecules, and by using an AI screening method they were able to examine 3 million per second — so the whole process took about half a day. (Of course, they were using a pretty big supercomputer.) It identified 41 of the best candidates, a manageable number for the experimental team to test in the lab. Hopefully they find something useful — I don’t want to have to deal with hydrogen leaks in my next car.
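A quick back-of-envelope check on those throughput figures, using only the numbers quoted above:

```python
# Sanity check: 160 billion candidate molecules screened at ~3 million per second.
molecules = 160e9
rate_per_second = 3e6

seconds = molecules / rate_per_second
hours = seconds / 3600
print(f"{seconds:,.0f} s  ~=  {hours:.1f} hours")  # ~53,333 s, ~14.8 hours — roughly half a day
```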
To close with a word of caution, though: a study in Science found that machine learning models used to predict how patients would respond to certain treatments were highly accurate…within the sample groups they were trained on. Applied to other cohorts, they basically didn’t help at all. That’s not to say they shouldn’t be used, but it supports what many people in the business keep saying: AI is not a silver bullet, and it needs to be thoroughly vetted in every new population and application it is applied to.