DNA and RNA have been compared to “instruction manuals” containing the information needed for living “machines” to operate. But while electronic machines like computers and robots are designed from the ground up to serve a specific purpose, biological organisms are governed by a much messier, more complex set of functions that lack the predictability of binary code. Inventing new solutions to biological problems requires teasing apart seemingly intractable variables — a task that is daunting to even the most intrepid human brains.
Two teams of scientists from the Wyss Institute at Harvard University and the Massachusetts Institute of Technology have devised pathways around this roadblock by going beyond human brains; they developed a set of machine learning algorithms that can analyze reams of RNA-based “toehold” sequences and predict which ones will be most effective at sensing and responding to a desired target sequence. As reported in two papers published concurrently
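The papers do not spell out the models here, but the general idea of scoring candidate sequences with a learned function can be sketched in a few lines. Everything below is an illustrative assumption: the one-hot encoding, the linear scorer, and the toy candidate sequences are hypothetical stand-ins, not the teams' actual pipeline.

```python
# Hypothetical sketch: encode RNA sequences as features, then rank
# candidates with a learned scoring function (toy weights shown here).
BASES = "ACGU"

def one_hot(seq):
    """One-hot encode an RNA sequence into a flat feature vector."""
    vec = []
    for base in seq:
        vec.extend(1.0 if base == b else 0.0 for b in BASES)
    return vec

def score(seq, weights):
    """Linear scorer: higher score = predicted to be a better toehold."""
    return sum(f * w for f, w in zip(one_hot(seq), weights))

# Rank a few made-up candidate sequences; in practice the weights would
# come from training on measured toehold performance data.
candidates = ["AUGGCA", "GCAUGC", "UUAACG"]
weights = [0.1] * (len(candidates[0]) * len(BASES))
ranked = sorted(candidates, key=lambda s: score(s, weights), reverse=True)
```

In a real pipeline the linear scorer would be replaced by a trained model, but the shape of the task is the same: map sequence to features, features to a predicted effectiveness score, and keep the top-ranked candidates for lab testing.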
Google’s Pixel 5 flagship smartphone drops the telephoto camera its predecessor offered for capturing distant subjects and switches instead to an ultrawide-angle alternative better suited to photographing groups of people and indoor scenes. The shift follows Apple’s iPhone 11, which added an ultrawide lens last year.
An ultrawide camera is also useful for video, where software stabilization often crops the outer portions of the frame. The ultrawide camera complements a traditional 12-megapixel main camera on the Pixel 5’s back and an 8-megapixel selfie camera on the front. The Pixel 5 starts at $700, and the same camera hardware is also used on the new Pixel 4a with 5G network support.
In the September issue of the journal Nature, scientists from Texas A&M University, Hewlett Packard Labs and Stanford University have described a new nanodevice that acts almost identically to a brain cell. Furthermore, they have shown that these synthetic brain cells can be joined together to form intricate networks that can then solve problems in a brain-like manner.
“This is the first study where we have been able to emulate a neuron with just a single nanoscale device, which would otherwise need hundreds of transistors,” said Dr. R. Stanley Williams, senior author on the study and professor in the Department of Electrical and Computer Engineering. “We have also been able to successfully use networks of our artificial neurons to solve toy versions of a real-world problem that is computationally intense even for the most sophisticated digital technologies.”
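The idea of many simple coupled neurons collectively settling into a solution is classically illustrated by a Hopfield-style network. The toy below is a textbook sketch of that general principle, not the nanodevice circuit from the study: it stores one pattern with Hebbian weights and recovers it from a corrupted input.

```python
# Classroom Hopfield-style network: simple +/-1 "neurons" coupled by
# weights collectively relax toward a stored pattern.
def train(patterns, n):
    # Hebbian learning: w[i][j] accumulates p[i]*p[j] over stored patterns.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    # Synchronous updates: each neuron takes the sign of its weighted input.
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, -1, 1, -1, 1, -1]
w = train([stored], 6)
noisy = [1, 1, 1, -1, 1, -1]   # one bit flipped
recovered = recall(w, noisy)   # network settles back to the stored pattern
```

The appeal of the hardware approach is that a physical device can perform this kind of collective relaxation natively, rather than simulating it step by step as the code above does.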
In particular, the researchers have demonstrated proof of concept that their brain-inspired system
The British startup Photogram AI has announced a new camera called the Alice Camera. It’s an “AI-accelerated computational camera” that aims to deliver better connectivity than a DSLR and better quality than a smartphone.
Smartphones have been making huge advances in the area of computational photography in recent years while traditional camera companies have largely been left in the dust. Alice is trying to bring the worlds of standalone cameras and computational photography together.
Alice is an interchangeable lens camera that features a dedicated AI chip “that elevates machine learning and pushes the boundaries of what a camera can do.”
“We’re a team of engineers, data scientists, and content creators and we’ve spent the last ten months building Alice because in our view cameras have seriously lacked meaningful innovation over the last ten years,” the startup says. “We believe you deserve an optical device more suited to the next