Fitbit for the flu: Researchers show the fitness wearables can help track outbreaks

Posted Wednesday, Jan 22, 2020 by Jeff Safire

A new study showed that, by employing secondary signals from heart rate, physical activity and sleep quality, common Fitbit trackers may be able to predict the spread of the flu in real time, and more accurately than current infectious disease surveillance methods.

By Conor Hale, FierceBiotech

Using de-identified data from more than 47,000 Fitbit users across five states, researchers were able to evaluate over 13.3 million daily measurements and track deviations from each individual’s personal norm. (Fitbit)

People’s resting heart rates tend to be faster when they’re sick with an illness like influenza, alongside changes in sleep routines and, of course, a tendency to be more sedentary than usual, according to research published in The Lancet Digital Health.

Using de-identified data from more than 47,000 consistent Fitbit users across five states, researchers were able to evaluate over 13.3 million daily measurements. By tracking deviations from their personal norms, they compared the proportion of users with abnormal readings to weekly estimates of flu-like illness rates from the Centers for Disease Control and Prevention.
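
As a rough illustration of the approach, here is a minimal Python sketch of per-user anomaly flagging on synthetic data; the column names, thresholds, and baseline statistics are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
import pandas as pd

# Hypothetical daily wearable data: one row per user per day.
rng = np.random.default_rng(0)
days = pd.date_range("2019-01-01", periods=120, freq="D")
records = []
for user in range(200):
    rhr = rng.normal(62, 4)        # this user's baseline resting heart rate
    sleep = rng.normal(7.0, 0.5)   # baseline hours of sleep
    for d in days:
        sick = rng.random() < 0.02  # occasional "flu-like" days
        records.append({
            "user": user, "date": d,
            "resting_hr": rhr + (6 if sick else 0) + rng.normal(0, 1.5),
            "sleep_hours": sleep + (1.0 if sick else 0) + rng.normal(0, 0.4),
        })
df = pd.DataFrame(records)

# Flag days that deviate from each individual's personal norm:
# elevated resting heart rate together with more sleep than usual.
base = df.groupby("user")[["resting_hr", "sleep_hours"]].transform("median")
spread = df.groupby("user")[["resting_hr", "sleep_hours"]].transform("std")
z = (df[["resting_hr", "sleep_hours"]] - base) / spread
df["abnormal"] = (z["resting_hr"] > 2) & (z["sleep_hours"] > 0.5)

# Weekly proportion of users with abnormal readings: the kind of aggregate
# signal the researchers compared against CDC influenza-like-illness rates.
weekly = df.groupby(pd.Grouper(key="date", freq="W"))["abnormal"].mean()
print(weekly.head())
```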

Across each state, data from Fitbit trackers helped improve influenza predictions. Additional prospective studies will be needed to differentiate infectious from non-infectious causes, but the case for speed is clear: traditional flu surveillance methods can take one to three weeks to report, greatly limiting the measures clinicians can take to respond to an outbreak.

“Responding more quickly to influenza outbreaks can prevent further spread and infection, and we were curious to see if sensor data could improve real-time surveillance at the state level,” study author Jennifer Radin of the Scripps Research Translational Institute said in a statement.

Read full article…

This article originally appeared at FierceBiotech | MedTech on Jan 17, 2020.

 



Ferroelectric Semiconductors Could Mix Memory and Logic

Posted Tuesday, Jan 14, 2020 by Jeff Safire

New 2D materials made into synapse-like devices for future neuromorphic chips

By Samuel K. Moore, IEEE Spectrum

Photo: Vincent Walter/Purdue University

Ferroelectric semiconductors could be the basis of a high-density memory for neuromorphic chips.

Engineers at Purdue University and at Georgia Tech have constructed the first devices from a new kind of two-dimensional material that combines memory-retaining properties and semiconductor properties. The engineers used a newly discovered ferroelectric semiconductor, alpha indium selenide, in two applications: as the basis of a type of transistor that stores memory as the amount of amplification it produces; and in a two-terminal device that could act as a component in future brain-inspired computers. The latter device was unveiled last month at the IEEE International Electron Devices Meeting in San Francisco.

Ferroelectric materials become polarized in an electric field and retain that polarization even after the field has been removed. Ferroelectric RAM cells in commercial memory chips use the former ability to store data in a capacitor-like structure. Recently, researchers have been trying to coax more tricks from these ferroelectric materials by bringing them into the transistor structure itself or by building other types of devices from them.

In particular, they’ve been embedding ferroelectric materials into a transistor’s gate dielectric, the thin layer that separates the electrode responsible for turning the transistor on and off from the channel through which current flows. Researchers have also been seeking a ferroelectric equivalent of the memristors, or resistive RAM, two-terminal devices that store data as resistance. Such devices, called ferroelectric tunnel junctions, are particularly attractive because they could be made into a very dense memory configuration called a cross-bar array. Many researchers working on neuromorphic- and low-power AI chips use memristors to act as the neural synapses in their networks. But so far, ferroelectric tunnel junction memories have been a problem.
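
To see why a dense cross-bar array of two-terminal memory cells is so attractive for neural-network hardware, consider the sketch below: each cell's stored conductance acts as a synaptic weight, and each column's output current sums the voltage-times-conductance products in the analog domain. The array size and values are arbitrary assumptions, not a model of any specific ferroelectric tunnel junction.

```python
import numpy as np

# Toy cross-bar array: rows are inputs, columns are outputs.
# Each cell stores a conductance G[i, j] (the "synaptic weight").
rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-5, size=(128, 64))   # siemens, arbitrary range

# Input vector applied as row voltages.
v_in = rng.uniform(0.0, 0.2, size=128)        # volts

# Ohm's law + Kirchhoff's current law: each column current is the dot
# product of the input voltages with that column's conductances, so the
# multiply-and-accumulate happens in the analog domain, in place.
i_out = v_in @ G                               # amperes, shape (64,)
print(i_out[:4])
```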


Read full article…


This article first appeared at IEEE SPECTRUM on Jan 14, 2020.



FDA approves Tandem’s closed-loop artificial pancreas

Posted Monday, Dec 16, 2019 by Jeff Safire

FDA approves Tandem’s closed-loop artificial pancreas to automatically control insulin doses

By Conor Hale   Dec 16, 2019

The FDA approved its first automatic insulin dosing system designed to deliver correction boluses as well as adjust background insulin levels to help prevent bouts of high and low blood sugar in people with Type 1 diabetes.

Tandem Diabetes Care’s Control-IQ artificial pancreas system also includes the agency’s first interoperable automated dosing controller, making it capable of connecting with different continuous glucose monitors (CGMs) and alternate controller-enabled insulin pumps, or ACE pumps.


The latest approval completes the trio of swappable controlling software, insulin pumps and continuous glucose monitors—paving the way for what the FDA describes as a personalized and complete automated insulin dosing setup, or AID system. (Tandem Diabetes Care)

The company plans to make the new software features available free of charge as an update in January 2020 for current users of its t:slim X2 insulin pumps, which received de novo clearance this past February. New pumps equipped with the Control-IQ algorithm are expected to begin shipping at the same time.

“With this clearance, we will be launching the most advanced automated insulin dosing system commercially available in the world today,” Tandem’s president and CEO, John Sheridan, said in a statement.

The pump and controller aim to predict a person’s glucose levels a half-hour ahead and adjust its doses automatically using blood sugar data from Dexcom’s G6 monitoring system. This includes reducing or halting basal insulin delivery if glucose levels drop too low, or calculating an hourly correction bolus if they’re too high—without the need for fingerstick draws or mealtime calibration. The device also includes tailored settings for periods of sleep or exercise.
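
As a purely illustrative sketch of what predict-ahead dosing logic can look like, the toy loop below extrapolates recent CGM readings 30 minutes forward and adjusts placeholder basal and bolus values. The thresholds, correction factor, and function names are invented for illustration; this is not the Control-IQ algorithm and not dosing guidance.

```python
def predict_glucose(readings_mg_dl, minutes_ahead=30, interval_min=5):
    """Linearly extrapolate the recent CGM trend (illustrative only)."""
    recent = readings_mg_dl[-4:]                      # last ~15-20 minutes
    slope = (recent[-1] - recent[0]) / ((len(recent) - 1) * interval_min)
    return recent[-1] + slope * minutes_ahead

def adjust_dosing(readings_mg_dl, basal_units_per_hr):
    """Toy decision logic with made-up thresholds; NOT medical advice."""
    predicted = predict_glucose(readings_mg_dl)
    if predicted < 80:          # heading low: cut background insulin
        return {"basal": 0.0, "bolus": 0.0, "note": "suspend basal"}
    if predicted > 180:         # heading high: small correction bolus
        correction = (predicted - 120) / 50.0         # placeholder factor
        return {"basal": basal_units_per_hr, "bolus": round(correction, 2),
                "note": "correction bolus"}
    return {"basal": basal_units_per_hr, "bolus": 0.0, "note": "no change"}

print(adjust_dosing([140, 150, 165, 178], basal_units_per_hr=0.8))
```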

In a clinical study published earlier this year, the system outperformed current treatments and helped users keep their blood glucose levels within a healthy range an average of 2.6 hours longer. It also had fewer sharp spikes or drops over a 24-hour period.

“Regulatory authorization of the Tandem Control-IQ algorithm for use as part of a hybrid closed-loop system is a huge win for the Type 1 diabetes community and a critical step forward in making day-to-day life better for people living with the disease,” said Aaron Kowalski, president and CEO of the Juvenile Diabetes Research Foundation.

Read full article →


This article appeared first at FierceBiotech on Dec 16, 2019.





Machine Learning vs. AI: Important Differences Between Them

Posted Sunday, Nov 17, 2019 by Jeff Safire

Unfortunately, some tech organizations are deceiving customers by claiming to use AI in their technologies while not being clear about their products’ limits

By Roberto Iriondo
August 23, 2019


Machine Learning Open License — Image: IoT World Today

Recently, a report was released regarding companies misrepresenting the use of artificial intelligence in their products and services. According to The Verge, 40% of European startups that claim to use AI don’t actually use the technology. Last year, TechTalks also stumbled upon such misuse by companies claiming to use machine learning and advanced artificial intelligence to gather and examine thousands of users’ data in order to enhance the user experience of their products and services.

Unfortunately, there’s still a lot of confusion among the public and the media regarding what artificial intelligence truly is and what machine learning truly is. Often the terms are used as synonyms; in other cases they are treated as discrete, parallel advancements; and some take advantage of the trend to create hype and excitement in order to increase sales and revenue.

Below, we will go through some of the main differences between AI and machine learning.

 

What is machine learning?

Quoting Tom M. Mitchell, Interim Dean of the School of Computer Science at Carnegie Mellon University, Professor, and former Chair of its Machine Learning Department:

A scientific field is best defined by the central question it studies. The field of Machine Learning seeks to answer the question:
“How can we build computer systems that automatically improve with experience, and what are the fundamental laws that govern all learning processes?”

Machine learning (ML) is a branch of artificial intelligence. As defined by computer scientist and machine learning pioneer Tom M. Mitchell: “Machine learning is the study of computer algorithms that allow computer programs to automatically improve through experience.” ML is one of the ways we expect to achieve AI. Machine learning relies on working with small to large datasets, examining and comparing the data to find common patterns and explore nuances.

For instance, if you provide a machine learning model with many songs you enjoy, along with their corresponding audio statistics (danceability, instrumentalness, tempo, or genre), it can (depending on the supervised machine learning model used) power a recommender system that suggests music you are likely to enjoy in the future, similar to what Netflix, Spotify, and other companies do.
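
A minimal sketch of that recommender idea, assuming a handful of made-up audio features, might score candidate tracks by their similarity to the average feature vector of songs the listener already likes:

```python
import numpy as np

# Illustrative audio features: [danceability, instrumentalness, tempo/200]
liked_songs = np.array([
    [0.80, 0.10, 0.62],
    [0.75, 0.05, 0.58],
    [0.85, 0.15, 0.65],
])
new_songs = {
    "track_a": np.array([0.78, 0.12, 0.60]),   # similar to liked songs
    "track_b": np.array([0.20, 0.90, 0.35]),   # very different profile
}

taste = liked_songs.mean(axis=0)                # simple "taste profile"

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Rank candidate tracks by similarity to the taste profile.
for name, features in sorted(new_songs.items(),
                             key=lambda kv: -cosine(taste, kv[1])):
    print(f"{name}: similarity {cosine(taste, features):.3f}")
```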

In a simple example, if you load a machine learning program with a considerably large dataset of x-ray pictures along with their descriptions (symptoms, items to consider, and so on), it will have the capacity to assist (or perhaps automate) the analysis of x-ray pictures later on. The machine learning model looks at each of the pictures in the diverse dataset and finds common patterns among pictures that have been labeled with comparable indications. Furthermore (assuming we use a good ML algorithm for images), when you load the model with new pictures, it compares their features with the examples it has learned from before in order to tell you how likely the new pictures are to contain any of the indications it has analyzed previously.

Supervised Learning (Classification/Regression) | Unsupervised Learning (Clustering) | Credits: Western Digital

The type of machine learning in our previous example is called “supervised learning.” Supervised learning algorithms try to model the relationships and dependencies between the target prediction output and the input features, so that we can predict output values for new data based on the relationships learned from previous datasets.

Unsupervised learning, another type of machine learning, is the family of algorithms mainly used in pattern detection and descriptive modeling. These algorithms have no output categories or labels for the data; the model is trained with unlabeled data.
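
A compact way to see the difference is to run both kinds of algorithm on the same toy data: the supervised model is given labels, while the unsupervised one must discover structure on its own. The scikit-learn sketch below is illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Two synthetic groups of points in 2-D feature space.
group_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
group_b = rng.normal(loc=[3, 3], scale=0.5, size=(50, 2))
X = np.vstack([group_a, group_b])
y = np.array([0] * 50 + [1] * 50)          # labels exist for supervision

# Supervised: learn a mapping from features to the known labels.
clf = LogisticRegression().fit(X, y)
print("supervised prediction:", clf.predict([[2.8, 3.1]]))

# Unsupervised: no labels, just group similar points together.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster assignment:", km.predict([[2.8, 3.1]]))
```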

This article was first published at Medium | Data Driven Investor on Oct 15, 2018, and updated on Aug 23, 2019.


Transhumanism: Where Physical and Digital Worlds Meld

Posted Thursday, Oct 3, 2019 by Jeff Safire

New whitepaper explains how augmented machines and augmented humans will represent physical reality

By Kathy Pretz, IEEE Spectrum



Illustration: iStockphoto

THE INSTITUTE Transhumanists say someday technology will dramatically enhance human intellect and physiology. That day might arrive sooner than we think.

People are already leveraging machines to improve their well-being and athletic performance and to extend their knowledge. Think fitness trackers, exoskeletons, and artificial intelligence.

In the “Augmented Machines and Augmented Humans Converging on Transhumanism” white paper, IEEE Senior Member Roberto Saracco describes how the transition is taking place through increased intelligence of machines, improved communications methods, and technologies that are being used to augment humans.

Saracco is co-chair of the IEEE Digital Reality Initiative, which seeks to advance artificial intelligence, augmented reality, machine learning, smart sensors, virtual reality, and related technologies.

“Through the transhumanism, cyberspace and physical reality will be our perceived reality,” he says. “Human augmentation and machine augmentation are converging, creating a new symbiotic creature.”

Here are highlights from the white paper, which was released in June.

DIGITAL TWINS
One phenomenon that is driving a transformation of the intersection between humans and technology is the creation of digital twins: virtual models of objects, processes, and large systems. Digital twins can be created from anything physical that is wired for data with sensors, including you and me.

As sensor coverage and analytics have improved, digital twins are able to better match the characteristics and behavior of their real-world counterpart, in many cases responding to changes as the real thing would.

Think of them as clones. Every time the original version gets updated, so does the twin. A high-fidelity twin can remain in sync with the physical entity as sensors detect changes. A good digital twin has the ability to shadow the evolution of its physical twin.
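
In its simplest software form, a digital twin is just a state object kept in sync with sensor readings from its physical counterpart. The sketch below is a generic illustration of that shadowing behavior, not tied to any particular platform described in the white paper.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class DigitalTwin:
    """Minimal stand-in for a physical asset's digital counterpart."""
    asset_id: str
    state: Dict[str, float] = field(default_factory=dict)
    history: list = field(default_factory=list)

    def update_from_sensor(self, reading: Dict[str, float]) -> None:
        # Every time the physical twin reports new measurements,
        # the digital twin shadows them and keeps a trace of changes.
        self.history.append(dict(self.state))
        self.state.update(reading)

    def predict_next(self, key: str) -> float:
        # Naive "respond as the real thing would": extrapolate the last change.
        if not self.history or key not in self.history[-1]:
            return self.state.get(key, 0.0)
        delta = self.state[key] - self.history[-1][key]
        return self.state[key] + delta

pump = DigitalTwin("pump-42")
pump.update_from_sensor({"temperature_c": 40.0, "rpm": 1500})
pump.update_from_sensor({"temperature_c": 42.5, "rpm": 1500})
print(pump.predict_next("temperature_c"))   # 45.0 under the naive model
```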

Saracco envisions physical and digital twins becoming even more closely linked. “The boundary between humans and machines will become fuzzier and fuzzier, leading to a fusion between a digital twin and its physical twin,” he writes. “The resulting reality, the one that we will perceive, will exist partially in the physical world and partly in cyberspace.”

SMARTER MACHINES
Saracco says there are four types of intelligence that can make machines smarter by taking advantage of digital twins: embedded, shared, collective, and emerging. He uses self-driving cars to explain his point.

Embedded processing capabilities can enhance both the physical twin as well as its digital counterpart. For self-driving cars, that can include providing the vehicle with awareness and understanding of its surroundings.

Intelligence also can be pooled. Thanks to sensors that allow vehicle-to-vehicle communication among autonomous cars, the vehicles will be able to navigate more safely because they will share information with each other, Saracco says.

As with bee colonies, the links and interactions between individuals can give rise to emerging intelligence. For autonomous cars, Saracco says, each one might follow a few basic rules of the road, but as a result of their mutually influencing behavior, they could ultimately improve the flow of traffic as an ensemble.

Machines will gain awareness by learning how to deal with situations, Saracco says, adding that eventually they also will be able to predict how the situations will evolve. For example, he says, some machines could sense the emotional states of people in a crowd by observing their faces and behavior.

“In this sense,” he writes, “we can say that a machine can read our mind.”

As robots and other machines become more software-driven, they will be able to adapt to situations and change their behavior accordingly, he says.

When robots eventually reach a higher level of intelligence and start to understand how humans react to specific situations, the machines will begin to influence their surroundings in a way that’s most beneficial to the robots, he predicts.

AUGMENTING HUMANS
People today are enhancing their physical performance with exoskeletons and smart glasses, and Saracco envisions further extensions of our physical and mental abilities.

Future advances in prosthetics, for example, will enhance specific features, like heightening our senses, he predicts.

“Smart prosthetics are becoming so seamless that they are no longer considered artificial parts,” he says. “The brain includes them in the body map.”

Medical implants that monitor our health will become more common as medicine is customized to the patient, he says, and eventually it will be difficult for people to live without the implants.

“Having an implant seamlessly connecting to cyberspace may become a competitive advantage, quickly leading to mass adoption,” Saracco says. “Our cognitive space will extend into cyberspace through a medical implant, a continuum where it will be impossible to separate the cognitive self from the extended self. In the coming decade, the relationship between cyberspace will become seamless, a sort of sixth sense.”

Ethical concerns are likely to arise, he says, adding that it’s important to consider the consequences.


This article appeared first at IEEE Spectrum | The Institute on Oct 3, 2019.



The Ultimate Optimization Problem: How to Best Use Every Square Meter of the Earth’s Surface

Posted Friday, Sep 27, 2019 by Jeff Safire

Lucas Joppa, founder of Microsoft’s AI for Earth program, is taking an engineering approach to environmental issues

By Eliza Strickland, IEEE Spectrum

Illustration: iStockphoto

Lucas Joppa thinks big. Even while gazing down into his cup of tea in his modest office on Microsoft’s campus in Redmond, Washington, he seems to see the entire planet bobbing in there like a spherical tea bag.

As Microsoft’s first chief environmental officer, Joppa came up with the company’s AI for Earth program, a five-year effort that’s spending US $50 million on AI-powered solutions to global environmental challenges.

The program is not just about specific deliverables, though. It’s also about mindset, Joppa told IEEE Spectrum in an interview in July. “It’s a plea for people to think about the Earth in the same way they think about the technologies they’re developing,” he says. “You start with an objective. So what’s our objective function for Earth?” (In computer science, an objective function describes the parameter or parameters you are trying to maximize or minimize for optimal results.)

AI for Earth launched in December 2017, and Joppa’s team has since given grants to more than 400 organizations around the world. In addition to receiving funding, some grantees get help from Microsoft’s data scientists and access to the company’s computing resources.

In a wide-ranging interview about the program, Joppa described his vision of the “ultimate optimization problem”—figuring out which parts of the planet should be used for farming, cities, wilderness reserves, energy production, and so on.

Every square meter of land and water on Earth has an infinite number of possible utility functions. It’s the job of Homo sapiens to describe our overall objective for the Earth. Then it’s the job of computers to produce optimization results that are aligned with the human-defined objective.

I don’t think we’re close at all to being able to do this. I think we’re closer from a technology perspective—being able to run the model—than we are from a social perspective—being able to make decisions about what the objective should be. What do we want to do with the Earth’s surface?
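
To make the framing concrete, here is a toy version of such an optimization: a few grid cells, assumed utility scores for a handful of land uses, and a minimum-wilderness constraint supplied by the humans. All numbers, names, and the greedy solver are illustrative assumptions, not anything AI for Earth has built.

```python
import numpy as np

# Assumed utility of assigning each grid cell to each land use (rows: cells).
uses = ["farm", "city", "wilderness"]
utility = np.array([
    [0.9, 0.2, 0.5],
    [0.3, 0.8, 0.6],
    [0.4, 0.1, 0.9],
    [0.7, 0.6, 0.4],
    [0.2, 0.3, 0.8],
])
min_wilderness = 3   # human-defined constraint: at least 3 of 5 cells stay wild

# Unconstrained optimum: each cell takes its highest-utility use.
choice = utility.argmax(axis=1)

# Enforce the constraint by converting the cells where switching to
# wilderness costs the least total utility ("lowest regret" first).
wild = uses.index("wilderness")
shortfall = min_wilderness - int(np.sum(choice == wild))
if shortfall > 0:
    regret = utility[np.arange(len(choice)), choice] - utility[:, wild]
    regret[choice == wild] = np.inf          # already wilderness
    for cell in np.argsort(regret)[:shortfall]:
        choice[cell] = wild

total = utility[np.arange(len(choice)), choice].sum()
print([uses[c] for c in choice], "total utility:", round(total, 2))
```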

Such questions are increasingly urgent, as climate change has already begun reshaping our planet and our societies. Global sea and air surface temperatures have already risen by an average of 1 degree Celsius above preindustrial levels, according to the Intergovernmental Panel on Climate Change.

Today, people all around the world participated in a “climate strike,” with young people leading the charge and demanding a global transition to renewable energy. On Monday, world leaders will gather in New York for the United Nations Climate Action Summit, where they’re expected to present plans to limit warming to 1.5 degrees Celsius.

Joppa says such summit discussions should aim for a truly holistic solution.

We talk about how to solve climate change. There’s a higher-order question for society: What climate do we want? What output from nature do we want and desire? If we could agree on those things, we could put systems in place for optimizing our environment accordingly. Instead we have this scattered approach, where we try for local optimization. But the sum of local optimizations is never a global optimization.

There’s increasing interest in using artificial intelligence to tackle global environmental problems. New sensing technologies enable scientists to collect unprecedented amounts of data about the planet and its denizens, and AI tools are becoming vital for interpreting all that data.

The 2018 report “Harnessing AI for the Earth,” produced by the World Economic Forum and the consulting company PwC, discusses ways that AI can be used to address six of the world’s most pressing environmental challenges (climate change, biodiversity, healthy oceans, water security, clean air, and disaster resilience).

Many of the proposed applications involve better monitoring of human and natural systems, as well as modeling applications that would enable better predictions and more efficient use of natural resources.

Joppa says that AI for Earth is taking a two-pronged approach, funding efforts to collect and interpret vast amounts of data alongside efforts that use that data to help humans make better decisions. And that’s where the global optimization engine would really come in handy.

For any location on earth, you should be able to go and ask: What’s there, how much is there, and how is it changing? And more importantly: What should be there?

On land, the data is really only interesting for the first few hundred feet. Whereas in the ocean, the depth dimension is really important.

We need a planet with sensors, with roving agents, with remote sensing. Otherwise our decisions aren’t going to be any good.

AI for Earth isn’t going to create such an online portal within five years, Joppa stresses. But he hopes the projects that he’s funding will contribute to making such a portal possible—eventually.

We’re asking ourselves: What are the fundamental missing layers in the tech stack that would allow people to build a global optimization engine? Some of them are clear, some are still opaque to me.

By the end of five years, I’d like to have identified these missing layers, and have at least one example of each of the components.

Some of the projects that AI for Earth has funded seem to fit that desire. Examples include SilviaTerra, which used satellite imagery and AI to create a map of the 92 billion trees in forested areas across the United States. There’s also OceanMind, a non-profit that detects illegal fishing and helps marine authorities enforce compliance. Platforms like Wildbook and iNaturalist enable citizen scientists to upload pictures of animals and plants, aiding conservation efforts and research on biodiversity. And FarmBeats aims to enable data-driven agriculture with low-cost sensors, drones, and cloud services.

It’s not impossible to imagine putting such services together into an optimization engine that knows everything about the land, the water, and the creatures who live on planet Earth. Then we’ll just have to tell that engine what we want to do about it.

Editor’s note: This story is published in cooperation with more than 250 media organizations and independent journalists that have focused their coverage on climate change ahead of the UN Climate Action Summit. IEEE Spectrum’s participation in the Covering Climate Now partnership builds on our past reporting about this global issue.

This article appeared at IEEE Spectrum on Sep. 20, 2019.

 



Keysight extends beyond 5G with participation in 6G flagship program

Posted Friday, Aug 23, 2019 by Jeff Safire

The next generation of wireless communications is expected to leverage spectrum above millimeter waves

by Monica Alleven


While the wireless industry is firmly entrenched in deploying 5G networks—and in many cases, busy debunking myths surrounding it—plenty of academics and others are exploring what’s going to happen after 5G.

Test and measurement company Keysight Technologies, which has been there throughout the 5G standards process and even before that, recently announced that it has joined the multi-party 6G Flagship Program supported by the Academy of Finland and led by the University of Oulu, Finland.

Keysight actually has had a long relationship with Oulu University and has an R&D team based in Oulu, according to Roger Nichols, 5G program manager at Keysight, so it’s not as if this is coming out of the blue. According to a press release, however, Keysight is the only test and measurement provider thus far invited to take part in the program.

Keysight said its early research capability, complemented by a range of software and hardware for design, simulation and validation, will help the program accomplish its overarching goals. Those goals include supporting the industry in finalizing the adoption of 5G across verticals, developing fundamental technologies needed to enable 6G such as artificial intelligence (AI) and intelligent UX, and speeding digitalization of society.

The next generation of wireless communications is expected to leverage spectrum above millimeter waves. The terahertz waves, from 300 GHz to 3 THz, form an important component in delivering data rates of up to one terabit per second and ultra-low latencies, but they are still very much in the experimental territory.
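
A back-of-the-envelope Shannon-capacity calculation shows why those wide bandwidths matter for terabit-class data rates; the bandwidth and signal-to-noise figures below are assumptions chosen only to illustrate the scaling.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon limit C = B * log2(1 + SNR) for an ideal channel."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative channels (bandwidths and SNRs are assumptions, not 6G specs).
for label, bw_hz, snr_db in [
    ("5G mmWave-like, 400 MHz", 400e6, 20),
    ("sub-THz, 10 GHz", 10e9, 15),
    ("sub-THz, 200 GHz", 200e9, 15),
]:
    print(f"{label}: ~{shannon_capacity_bps(bw_hz, snr_db) / 1e9:.0f} Gb/s")
```

With a couple of hundred gigahertz of usable spectrum, even modest signal-to-noise ratios put the ideal limit near a terabit per second, which is the scale the terahertz bands are expected to enable.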

“A lot of what’s happening up there now is still in the research phase because as you can imagine in those higher frequencies, it’s challenging to get things to work the way you want them,” Nichols told FierceWirelessTech. “We’ve been involved in that territory for quite a while,” having sub-100 GHz capability in its equipment for decades and using third-parties to extend that up into the terahertz range.

It’s not just about higher frequencies but what can be done with the wide bandwidths. For the sake of 6G, “really this is about: can we get an even wider bandwidth to deal with new applications that we haven’t thought about that have a demand for data rates that are well beyond anything we’re considering for 5G?” he said. “Obviously, going to terahertz super wide bandwidth is only part of 6G, just like millimeter wave is only part of 5G.”

Nichols points to an ITU Network 2030 white paper that describes the Network 2030 initiative and provides a comprehensive analysis of the applications, network, and infrastructure envisioned for the next big wireless transformation. That paper points to holographic type communications, multi-sense networks, time-engineered applications and critical infrastructure as emerging applications or use cases.

But nobody is suggesting the industry get ahead of itself. Part of Keysight’s success in 5G was getting involved early, knowing where the technology was headed and which tools would be needed, and developing relationships with academia and industry. “Clearly, we’re going to spend our time ensuring that we stay on top of that business opportunity, which is far from being over,” he said.

 
This article appeared first at FierceWireless.com on Aug 23, 2019.

 



Why self-driving car companies are spilling their secrets

Posted Wednesday, Aug 21, 2019 by Jeff Safire

Self-driving technology is hard — so hard that even the industry front-runner is showing its cards to try to get more brainpower on the problem.

by Joann Muller for Axios

Illustration: Lazaro Gamio/Axios

Driving the news: Waymo announced Wednesday it’s sharing what is believed to be one of the largest troves of self-driving vehicle data ever released in the hope of accelerating the development of automated vehicle technology.

“The more smart brains you can get working on the problem, whether inside or outside the company, the better,” says Waymo principal scientist Drago Anguelov.

Why it matters: Data is a critical ingredient for machine learning, which is why until recently, companies developing automated driving systems viewed their testing data as a closely guarded asset.

But there’s now a growing consensus that sharing that information publicly could help get self-driving cars on the road faster.

What’s happening: The idea is to eliminate what has been a major roadblock for academia — a lack of relevant research data.

Aptiv, Argo and Lyft have released maps and images collected via cameras and lidar sensors.
Now, even Waymo — the market leader, with more than 10 million autonomous test miles — is opening up its digital vault.

Context: On any given day, an AV can collect more than 4 terabytes of raw sensor data, but not all of that is useful, Navigant Research analyst Sam Abuelsamid writes in Forbes.

During testing, a safety driver typically oversees the vehicle’s operation, while an engineer with a laptop in the passenger seat makes a notation of interesting encounters or challenging scenarios.

At the end of the day, all the sensor data from the vehicle is downloaded. The “good stuff,” as Abuelsamid calls it — encounters with pedestrians, cyclists, animals, traffic signals and more — is analyzed and labeled.

It’s a labor-intensive process, as the New York Times described in a story this week.

Humans — lots and lots of humans, NYT notes — must label and annotate all the data by hand so the AI system can understand what it’s “seeing” before it can begin learning.

People pore over images of street scenes, drawing digital boxes around and adding labels to things that are important to know, like: This is a pedestrian, a stroller, a double yellow line.
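
That labeling workflow boils down to attaching typed bounding boxes to frames of sensor data. The sketch below shows a generic annotation record of that kind; the field names and values are illustrative and are not Waymo's released schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BoundingBox:
    """One hand-drawn label on a camera frame (illustrative schema)."""
    category: str          # e.g. "pedestrian", "stroller", "double_yellow_line"
    x_min: float
    y_min: float
    x_max: float
    y_max: float

@dataclass
class LabeledFrame:
    frame_id: str
    timestamp_s: float
    boxes: List[BoundingBox]

frame = LabeledFrame(
    frame_id="drive_0042_frame_001337",
    timestamp_s=1566393600.0,
    boxes=[
        BoundingBox("pedestrian", 412.0, 220.0, 468.0, 390.0),
        BoundingBox("double_yellow_line", 0.0, 480.0, 1280.0, 512.0),
    ],
)
print(len(frame.boxes), "labeled objects in", frame.frame_id)
```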

Read full article…

This article appeared first at Axios.com on Aug 21, 2019.



Security researchers: DSLR cameras vulnerable to ransomware attack

Posted Monday, Aug 12, 2019 by Jeff Safire

Canon has issued a security advisory and firmware patch for the vulnerability

By Andrew Liptak | August 11, 2019

Ransomware has become a major threat to computer systems in recent years, as high-profile attacks have locked users out of personal computers, hospitals, city governments, and even The Weather Channel. Now, security researchers have discovered another device that might be at risk: a DSLR camera.

Check Point Software Technologies issued a report today that detailed how its security researchers were able to remotely install malware on a DSLR camera. In it, researcher Eyal Itkin found that a hacker can easily plant malware on a digital camera. He says that the standardized Picture Transfer Protocol is an ideal method for delivering malware: it’s unauthenticated and can be used with both Wi-Fi and USB. The report notes that an individual with an infected Wi-Fi access point could deploy it at a tourist destination to pull off an attack, or infect a user’s PC.

In a video, Itkin shows off how he was able to exploit a Canon EOS 80D over Wi-Fi and encrypt the images on the SD card so that the user wouldn’t be able to access them. He also notes that cameras could be a particularly juicy target for hackers: they’re full of personal images that most people likely won’t want to walk away from. In a real ransomware attack, a hacker will typically demand a small amount of money in exchange for the key that will decrypt the files — usually a small enough amount that people would rather just pay to get rid of the inconvenience.

Check Point says that it disclosed the vulnerability to Canon back in March, and the two began work in May to develop a patch. Last week, Canon issued a security advisory, telling people to avoid using unsecured Wi-Fi networks, to turn off its network functions when it’s not being used, and to update and install a new security patch onto the camera itself. Itkin says that he only worked with a Canon device, but tells The Verge that “due to the complexity of the protocol, we do believe that other vendors might be vulnerable as well, however it depends on their respective implementation.”

This article first appeared at The Verge on Aug 11, 2019.

 



Applied Materials’ New Memory Machines

Posted Wednesday, Jul 10, 2019 by Jeff Safire

Tools designed to rapidly build embedded MRAM, RRAM, and phase change memories on logic chips expand foundry options

By Samuel K. Moore

Applied Materials’ Endura Impulse uses nine physical vapor deposition systems to rapidly build RRAM or PCRAM. Photo: Applied Materials.

Chip equipment giant Applied Materials wants foundry companies to know that it feels their pain. Continuing down the traditional Moore’s Law path of increasing the density of transistors on a chip is too expensive for all but the three richest players—Intel, Samsung, and TSMC. So to keep the customers coming, other foundries can instead add new features, such as the ability to embed new non-volatile memories—RRAM, phase change memory, and MRAM—right on the processor. The trouble is, those are really hard things to make at scale. So Applied has invented a pair of machines that boost throughput by more than an order of magnitude. It unveiled the machines on 9 July at Semicon West, in San Francisco.

Building embedded spin-torque transfer MRAM—a two-terminal device that stores data in the materials’ magnetic orientation—is a particularly difficult task. “MRAM is a complex stack,” says Kevin Moraes, vice president of metal deposition at Applied Materials. Each cell has “30-plus layers and 10-plus materials. Some are only a couple angstroms thick; even fractional variation can have a strong effect.”

Building it requires many passes through physical vapor deposition (PVD) tools. Hard drive manufacturers have had the ability to make such structures for their read heads, but the volume of those devices is so low that they could afford to use a low-yield, and therefore expensive, process, says Moraes.

But when nonvolatile memory is just one part of a larger—and potentially expensive—piece of logic, you need a high-yield and high-throughput process. Inside a single vacuum system, Applied Materials’ new Endura Clover system integrates nine PVD tools, each of which can deposit five different materials. The system also includes an atomic-precision metrology unit, so the thicknesses of the deposited materials can be measured without having to leave the vacuum environment.

Phase change RAM (PCRAM) and resistive RAM are somewhat simpler to construct than MRAM. PCRAM stores its bit as the crystal state of a material, which is resistive in its amorphous state and more conductive in its crystalline state. RRAM’s information is also stored as a resistance, but it changes according to a conductive bridge that forms through an otherwise resistive material. The Endura Impulse, Applied’s solution for those two memory technologies, is also a nine-PVD machine with integrated metrology.

Moraes indicated that certain customers already were using the two tools. Though he wouldn’t name them, it seems likely that GlobalFoundries is among them. Diversifying its offerings to include embedded nonvolatile memories was a key strategy for the company when its leadership decided to abandon the manufacturing nodes at 7 nanometers and below. The company began offering embedded MRAM in 2018. It already manufactures chips for stand-alone MRAM producer Everspin, which recently began pilot production of a 1-gigabit chip. GlobalFoundries isn’t alone among the majors in embedding MRAM, of course. TSMC, Intel, and Samsung have also developed it.

Embedded nonvolatile memories are also key to some kinds of neuromorphic and deep learning accelerators. For neuromorphic chips, the memory cells encode the values of the neural network’s synapses. In deep learning, they form the backbone of the multiply and accumulate circuits that are the basis for AI inferencing. “IBM has been spearheading R&D of new memories for many years, and we see the need for these technologies increasing as the AI era demands improvements in chip performance and efficiency,” Mukesh Khare, vice president for semiconductors and AI hardware and systems at IBM Research said in a press release.

This article first published at IEEE Spectrum on Jul 10, 2019.

