Tag Archives: artificial

#434182 Why AI robot toys could be good for kids

A new generation of robot toys with personalities powered by artificial intelligence could give kids more than just a holiday plaything, according to a University of Alberta researcher.

Posted in Human Robots

#434151 Life-or-Death Algorithms: The Black Box ...

When it comes to applications for machine learning, few can be more widely hyped than medicine. This is hardly surprising: it’s a huge industry that generates a phenomenal amount of data and revenue, where technological advances can improve or save the lives of millions of people. Hardly a week passes without a study suggesting that algorithms will soon be better than experts at detecting pneumonia or Alzheimer’s—diseases in complex organs ranging from the eye to the heart.

The problems of overcrowded hospitals and overworked medical staff plague public healthcare systems like Britain’s NHS and lead to rising costs for private healthcare systems. Here, again, algorithms offer a tantalizing solution. How many of those doctor’s visits really need to happen? How many could be replaced by an interaction with an intelligent chatbot—especially if it can be combined with portable diagnostic tests, utilizing the latest in biotechnology? That way, unnecessary visits could be reduced, and patients could be diagnosed and referred to specialists more quickly without waiting for an initial consultation.
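To make the triage idea concrete, here is a toy Python sketch of the kind of rule a symptom-checker chatbot might apply before deciding whether a visit is needed. The thresholds and categories are purely illustrative, not medical guidance:

```python
# Toy triage rule for a hypothetical symptom-checker chatbot.
# Thresholds are illustrative only, NOT medical advice.
def triage(temperature_c: float, days_of_symptoms: int, breathless: bool) -> str:
    if breathless or temperature_c >= 39.5:
        return "refer: urgent in-person assessment"
    if temperature_c >= 38.0 and days_of_symptoms >= 3:
        return "refer: book a GP appointment"
    return "self-care: no visit needed"

# A mild, one-day fever is handled without a doctor's visit:
outcome = triage(temperature_c=37.2, days_of_symptoms=1, breathless=False)
```

A real system would of course weigh far more signals, and would be tuned to err on the side of referral.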

As ever with artificial intelligence algorithms, the aim is not to replace doctors, but to give them tools to reduce the mundane or repetitive parts of the job. With an AI that can examine thousands of scans in a minute, the “dull drudgery” is left to machines, and the doctors are freed to concentrate on the parts of the job that require more complex, subtle, experience-based judgement of the best treatments and the needs of the patient.

High Stakes
But, as ever with AI algorithms, there are risks involved with relying on them—even for tasks that are considered mundane. The problems of black-box algorithms that make inexplicable decisions are bad enough when you’re trying to understand why that automated hiring chatbot was unimpressed by your job interview performance. In a healthcare context, where the decisions made could mean life or death, the consequences of algorithmic failure could be grave.

A new paper in Science Translational Medicine, by Nicholson Price, explores some of the promises and pitfalls of using these algorithms in the data-rich medical environment.

Neural networks excel at churning through vast quantities of training data and making connections, absorbing the underlying patterns or logic for the system in hidden layers of linear algebra; whether it’s detecting skin cancer from photographs or learning to write in pseudo-Shakespearean script. They are terrible, however, at explaining the underlying logic behind the relationships that they’ve found: there is often little more than a string of numbers, the statistical “weights” between the layers. They struggle to distinguish between correlation and causation.
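To see why a network’s “reasoning” is so opaque, consider this minimal Python sketch with made-up weights (not a real diagnostic model): the entire decision process is nothing but a few numeric matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
# A trained network's "explanation" is just its weight matrices
# (random here, purely for illustration):
W1 = rng.normal(size=(4, 8))   # input features -> hidden layer
W2 = rng.normal(size=(8, 1))   # hidden layer -> output

def predict(x):
    hidden = np.maximum(0, x @ W1)            # ReLU hidden layer
    return 1 / (1 + np.exp(-(hidden @ W2)))   # sigmoid "diagnosis score"

x = np.array([0.2, 1.3, -0.7, 0.5])  # hypothetical patient features
score = predict(x)
# The only "reasoning" available for the score is W1 and W2:
# forty raw numbers with no clinical interpretation attached.
```

Interrogating those forty numbers tells you nothing about *why* a given patient scored high, which is exactly the black-box problem.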

This raises interesting dilemmas for healthcare providers. The dream of big data in medicine is to feed a neural network on “huge troves of health data, finding complex, implicit relationships and making individualized assessments for patients.” What if, inevitably, such an algorithm proves to be unreasonably effective at diagnosing a medical condition or prescribing a treatment, but you have no scientific understanding of how this link actually works?

Too Many Threads to Unravel?
The statistical models that underlie such neural networks often assume that variables are independent of each other, but in a complex, interacting system like the human body, this is not always the case.

In some ways, this is a familiar concept in medical science—there are many phenomena and links which have been observed for decades but are still poorly understood on a biological level. Paracetamol is one of the most commonly-prescribed painkillers, but there’s still robust debate about how it actually works. Medical practitioners may be keen to deploy whatever tool is most effective, regardless of whether it’s based on a deeper scientific understanding. Fans of the Copenhagen interpretation of quantum mechanics might spin this as “Shut up and medicate!”

But as in that field, there’s a debate to be had about whether this approach risks losing sight of a deeper understanding that will ultimately prove more fruitful—for example, for drug discovery.

Away from the philosophical weeds, there are more practical problems: if you don’t understand how a black-box medical algorithm is operating, how should you approach the issues of clinical trials and regulation?

Price points out that, in the US, the 21st Century Cures Act allows the FDA to regulate any algorithm that analyzes images or that doesn’t allow a provider to review the basis for its conclusions: this could completely exclude “black-box” algorithms of the kind described above from use.

Transparency about how the algorithm functions—the data it looks at, and the thresholds for drawing conclusions or providing medical advice—may be required, but could also conflict with the profit motive and the desire for secrecy in healthcare startups.

One solution might be to screen out algorithms that can’t explain themselves, or that don’t rely on well-understood medical science, before they enter the healthcare market. But this could prevent people from reaping the benefits they can provide.

Evaluating Algorithms
New healthcare algorithms will be unable to do what physicists did with quantum mechanics and point to a track record of success, because they will not yet have been deployed in the field. And, as Price notes, many algorithms will improve the longer they are deployed, as they harvest and learn from real-world performance data. So how can we choose between the most promising approaches?

Creating a standardized clinical trial and validation system that works equally well across algorithms that function in different ways, or use different input or training data, will be a difficult task. Clinical trials that rely on small sample sizes, such as those for algorithms that attempt to personalize treatment to individuals, will also prove difficult: with few patients and little scientific understanding, it’s hard to tell whether an algorithm succeeded through genuine skill or merely by chance.
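The small-sample problem is easy to see with a quick binomial calculation (a sketch of the statistics, not part of Price’s paper): the same 80 percent hit rate that is unconvincing over 10 patients is overwhelming evidence over 100.

```python
from math import comb

def binom_p_value(successes, n, p=0.5):
    """One-sided probability of getting >= successes by pure chance."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

# An algorithm gets 8 of 10 personalized treatment calls right:
small = binom_p_value(8, 10)    # ~0.055: could plausibly be luck
# The same 80% hit rate over 100 patients:
large = binom_p_value(80, 100)  # vanishingly small: clearly not luck
```

With ten patients, a coin-flipping algorithm would match that record about one time in eighteen; with a hundred, essentially never.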

Add learning into the mix and the picture gets more complex. “Perhaps more importantly, to the extent that an ideal black-box algorithm is plastic and frequently updated, the clinical trial validation model breaks down further, because the model depends on a static product subject to stable validation.” As Price describes, the current system for testing and validation of medical products needs some adaptation to deal with this new software before it can successfully test and validate the new algorithms.

Striking a Balance
The story in healthcare reflects the AI story in so many other fields, and the complexities involved perhaps illustrate why even an illustrious company like IBM appears to be struggling to turn its famed Watson AI into a viable product in the healthcare space.

A balance must be struck in our rush to exploit big data, the eerie power of neural networks, and the automation of thinking. We must be aware of the biases and flaws of this approach to problem-solving, and recognize that it is not a panacea.

But we also need to embrace these technologies where they can be a useful complement to the skills, insights, and deeper understanding that humans can provide. Much like a neural network, our industries need to train themselves to enhance this cooperation in the future.

Image Credit: Connect world / Shutterstock.com

Posted in Human Robots

#433954 The Next Great Leap Forward? Combining ...

The Internet of Things is a popular vision of objects with internet connections sending information back and forth to make our lives easier and more comfortable. It’s emerging in our homes, through everything from voice-controlled speakers to smart temperature sensors. To improve our fitness, smart watches and Fitbits are telling online apps how much we’re moving around. And across entire cities, interconnected devices are doing everything from increasing the efficiency of transport to flood detection.

In parallel, robots are steadily moving outside the confines of factory lines. They’re starting to appear as guides in shopping malls and cruise ships, for instance. As prices fall and artificial intelligence (AI) and mechanical technology continue to improve, we will become more and more used to them making independent decisions in our homes, streets, and workplaces.

Here lies a major opportunity. Robots become considerably more capable with internet connections. There is a growing view that the next evolution of the Internet of Things will be to incorporate them into the network, opening up thrilling possibilities along the way.

Home Improvements
Even simple robots become useful when connected to the internet—getting updates about their environment from sensors, say, or learning about their users’ whereabouts and the status of appliances in the vicinity. This lets them lend their bodies, eyes, and ears to give an otherwise impersonal smart environment a user-friendly persona. This can be particularly helpful for people at home who are older or have disabilities.

We recently unveiled a futuristic apartment at Heriot-Watt University to work on such possibilities. One of a handful of such test sites around the EU, the apartment focuses entirely on people with special needs—and on how robots can help them by interacting with connected devices in a smart home.

Suppose a doorbell rings that has smart video features. A robot could find the person in the home by accessing their location via sensors, then tell them who is at the door and why. Or it could help make video calls to family members or a professional carer—including allowing them to make virtual visits by acting as a telepresence platform.
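A minimal sketch of that doorbell scenario might look like the following; the names and APIs here are entirely hypothetical (a real smart home would use a messaging protocol such as MQTT):

```python
# Hypothetical smart-home event handling; names are illustrative,
# not a real smart-home API.
from dataclasses import dataclass

@dataclass
class DoorbellEvent:
    visitor: str
    reason: str

# Presence sensors keep this mapping up to date:
SENSOR_LOCATIONS = {"resident": "living room"}

def handle_doorbell(event: DoorbellEvent) -> str:
    room = SENSOR_LOCATIONS["resident"]
    # The robot navigates to the resident's room, then announces the visitor.
    return f"Going to the {room}: {event.visitor} is at the door ({event.reason})."

msg = handle_doorbell(DoorbellEvent("Dr. Smith", "weekly check-up"))
```

The point of the sketch is the division of labor: the sensors know *where* the user is, and the robot supplies the mobile, personable interface.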

Equally, it could offer protection. It could inform them the oven has been left on, for example—phones or tablets are less reliable for such tasks because they can be misplaced or not heard.

Similarly, the robot could raise the alarm if its user appears to be in difficulty.

Of course, voice-assistant devices like Alexa or Google Home can offer some of the same services. But robots are far better at moving, sensing, and interacting with their environment. They can also engage their users by pointing at objects or acting more naturally, using gestures or facial expressions. These “social abilities” create bonds which are crucially important for making users more accepting of the support and making it more effective.

To help incentivize the various EU test sites, our apartment also hosts the likes of the European Robotic League Service Robot Competition—a sort of Champions League for robots geared to special needs in the home. This brought academics from around Europe to our laboratory for the first time in January this year. Their robots were tested in tasks like welcoming visitors to the home, turning the oven off, and fetching objects for their users; and a German team from Koblenz University won with a robot called Lisa.

Robots Offshore
There are comparable opportunities in the business world. Oil and gas companies are looking at the Internet of Things, for example; experimenting with wireless sensors to collect information such as temperature, pressure, and corrosion levels to detect and possibly predict faults in their offshore equipment.

In the future, robots could be alerted to problem areas by sensors to go and check the integrity of pipes and wells, and to make sure they are operating as efficiently and safely as possible. Or they could place sensors in parts of offshore equipment that are hard to reach, or help to calibrate them or replace their batteries.

The ORCA Hub, a £36m project led by the Edinburgh Centre for Robotics that brings together leading experts and over 30 industry partners, is developing such systems. The aim is to reduce the costs and risks of humans working in remote, hazardous locations.

ORCA tests a drone robot. Image Credit: ORCA
Working underwater is particularly challenging, since radio waves don’t travel well under the sea. Underwater autonomous vehicles and sensors usually communicate using acoustic waves, which are many times slower (about 1,500 meters per second, versus roughly 300 million meters per second for radio waves). Acoustic communication devices are also much more expensive than those used above the water.
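The practical cost of that speed gap is easy to quantify. This back-of-the-envelope Python sketch assumes a 3 km link, a distance chosen purely for illustration:

```python
# Round-trip signalling delay over a 3 km link: acoustic (underwater)
# vs. radio (above water). Distances and figures are illustrative.
DIST_M = 3_000
SPEED_ACOUSTIC = 1_500         # m/s, speed of sound in seawater (approx.)
SPEED_RADIO = 300_000_000      # m/s, radio waves (approx. speed of light)

acoustic_rtt = 2 * DIST_M / SPEED_ACOUSTIC  # 4.0 seconds
radio_rtt = 2 * DIST_M / SPEED_RADIO        # 0.00002 seconds
```

A four-second round trip versus tens of microseconds explains why coordinating underwater vehicles in real time is so much harder than flying drones.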

This academic project is developing a new generation of low-cost acoustic communication devices, and trying to make underwater sensor networks more efficient. It should help sensors and underwater autonomous vehicles to do more together in future—repair and maintenance work similar to what is already possible above the water, plus other benefits such as helping vehicles to communicate with one another over longer distances and tracking their location.

Beyond oil and gas, there is similar potential in sector after sector. There are equivalents in nuclear power, for instance, and in cleaning and maintaining the likes of bridges and buildings. My colleagues and I are also looking at possibilities in areas such as farming, manufacturing, logistics, and waste.

First, however, the research sectors around the Internet of Things and robotics need to properly share their knowledge and expertise. They are often isolated from one another in different academic fields. There needs to be more effort to create a joint community, such as the dedicated workshops for such collaboration that we organized at the European Robotics Forum and the IoT Week in 2017.

To the same end, industry and universities need to look at setting up joint research projects. It is particularly important to address safety and security issues—hackers taking control of a robot and using it to spy or cause damage, for example. Such issues could make customers wary and ruin a market opportunity.

We also need systems that can work together, rather than in isolated applications. That way, new and more useful services can be quickly and effectively introduced with no disruption to existing ones. If we can solve such problems and unite robotics and the Internet of Things, it genuinely has the potential to change the world.

Mauro Dragone, Assistant Professor, Cognitive Robotics, Multiagent systems, Internet of Things, Heriot-Watt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Willyam Bradberry/Shutterstock.com

Posted in Human Robots

#433953 The Future of Artificial Intelligence

Believe it or not, we are all leveraging artificial intelligence-like technology in our everyday lives. Whether it’s locating a nearby barber shop or placing a simple order on an e-commerce marketplace, AI is somehow involved. A report by Statista reveals that the overall AI market will reach $7.35 billion …

The post The Future of Artificial Intelligence appeared first on TFOT.

Posted in Human Robots

#433950 How the Spatial Web Will Transform Every ...

What is the future of work? Is our future one of ‘technological socialism’ (where technology is taking care of our needs)? Or is our future workplace completely virtualized, whereby we hang out at home in our PJs while walking about our virtual corporate headquarters?

This blog looks at the future of work in the age of Web 3.0, examining scenarios in which AI, VR, and the spatial web converge to transform every element of our careers, from training to execution to free time.

Three weeks ago, I explored the vast implications of Web 3.0 on news, media, smart advertising, and personalized retail. And to offer a quick recap on what the Spatial Web is and how it works, let’s cover some brief history.

A Quick Recap on Web 3.0
While Web 1.0 consisted of static documents and read-only data (static web pages), Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens.

But over the next two to five years, the convergence of 5G, artificial intelligence, VR/AR, and a trillion-sensor economy will enable us to both map our physical world into virtual space and superimpose a digital data layer onto our physical environments.

Suddenly, all our information will be manipulated, stored, understood, and experienced in spatial ways.

In this third installment of the Web 3.0 series, I’ll be discussing the Spatial Web’s vast implications for:

Professional Training
Delocalized Business and the Virtual Workplace
Smart Permissions and Data Security

Let’s dive in.

Virtual Training, Real-World Results
Virtual and augmented reality have already begun disrupting the professional training market.

Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.

In September 2018, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, neighborhood market, and discount store with VR-based employee training.

In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could compress a typical six-year aircraft design process into just six months, turning physical mock-ups into CAD-designed virtual replicas.

But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real-time.

And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.

These devices aim to dramatically reduce the time and trouble of testing pilots in VR by giving touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, they contain a suite of actuators simulating everything from a light touch to higher-pressure contact, all controlled by gaze and finger movements.

When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.

Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.

But perhaps most urgent, Web 3.0 and its VR interface will offer an immediate solution for today’s constant industry turnover and large-scale re-education demands.

VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.

Want to be an electric, autonomous vehicle mechanic at age 15? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.

Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.

As tomorrow’s career model shifts from a “one-and-done graduate degree” to lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to enter a new industry.

But beyond professional training and virtually enriched, real-world work scenarios, Web 3.0 promises entirely virtual workplaces and blockchain-secured authorization systems.

Rise of the Virtual Workplace and Digital Data Integrity
In addition to enabling an annual $52 billion virtual goods marketplace, the Spatial Web is also giving way to “virtual company headquarters” and completely virtualized companies, where employees can work from home or any place on the planet.

Too good to be true? Check out an incredible publicly listed company called eXp Realty.

Launched on the heels of the 2008 financial crisis, eXp Realty beat the odds, going public this past May and surpassing a $1B market cap on day one of trading.

But how? Opting for a demonetized virtual model, eXp’s founder Glenn Sanford decided to ditch brick and mortar from the get-go, instead building out an online virtual campus for employees, contractors, and thousands of agents.

And after years of hosting team meetings, training seminars, and even agent discussions with potential buyers through 2D digital interfaces, eXp’s virtual headquarters went spatial.

What is eXp’s primary corporate value? FUN! And Glenn Sanford’s employees love their jobs.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and an extraordinary new precedent.

Foregoing any physical locations for a centralized VR campus, eXp Realty has essentially thrown out all overhead and entered a lucrative market with barely any upfront costs.

Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

Throw in the Spatial Web’s fundamental blockchain-based data layer, and now cryptographically secured virtual IDs will let you validate colleagues’ identities or any of the virtual avatars we will soon inhabit.

This becomes critically important for spatial information logs—keeping incorruptible records of who’s present at a meeting, which data each person has access to, and AI-translated reports of everything discussed and contracts agreed to.
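One way to keep such records incorruptible is a hash chain, where each entry’s hash covers the previous entry, so tampering with any record invalidates everything after it. Here is a minimal Python sketch, illustrative only; a production system would add digital signatures and distributed consensus:

```python
import hashlib
import json

def chain_append(log, record):
    """Append a record whose hash covers the previous entry (a minimal hash chain)."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    entry = {
        "record": record,
        "prev": prev_hash,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }
    log.append(entry)
    return log

log = []
chain_append(log, {"meeting": "q3-review", "present": ["ava", "ben"]})
chain_append(log, {"meeting": "q3-review", "agreed": "contract-17"})
# Altering the first record would change its hash, breaking the "prev"
# link stored in every later entry.
```

This is the basic property a blockchain-backed attendance or contract log relies on: history can be appended to, but not quietly rewritten.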

But as I discussed in a previous Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high rises too.

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge: your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.

Final Thoughts
While converging technologies slash the lifespan of Fortune 500 companies, bring on the rise of vast new industries, and transform the job market, Web 3.0 is changing the way we work, where we work, and who we work with.

Life-like virtual modules are already unlocking countless professional training camps, modifiable in real-time and easily updated.

Virtual programming and blockchain-based authentication are enabling smart data logging, identity protection, and on-demand smart asset trading.

And VR/AR-accessible worlds (and corporate campuses) not only demonetize, dematerialize, and delocalize our everyday workplaces, but enrich our physical worlds with AI-driven, context-specific data.

Welcome to the Spatial Web workplace.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: MONOPOLY919 / Shutterstock.com

Posted in Human Robots