Soldiers could mentally steer robotic quadrupeds thanks to a tiny sensor discreetly tucked behind the ear.

In the future, soldiers may be able to connect with a variety of sensors, vehicles, and robots on the battlefield, even as adversaries try to snoop on their radio transmissions, thanks to a development that allows a human to control a robot simply by thinking.

Researchers from Australia working with the nation's Defense Department have just released a study in the journal Applied Nano Materials describing how a test participant wearing a Microsoft HoloLens guided a ground robot to waypoints simply by looking at them.

The U.S. military has had some remarkably successful outcomes with brain-computer interfaces. A brain chip created by the Defense Advanced Research Projects Agency, or DARPA, allowed a paralyzed woman to fly a simulated F-35 in 2015 using only her brain signals. But such chips require surgical implantation. And sensors worn on the skin typically need gels for good electrical conductivity. Neither approach works well for soldiers wearing helmets.

Because the gel progressively dries out, the study states, "use of the gel contributes to skin irritation, risk of infection, hair fouling, allergic reaction, instability upon motion of the individual, and unsuitability for long-term operation."

"Up to this point, brain-computer interface (BCI) systems have only been successful in lab conditions, requiring the user to wear intrusive or heavy wet sensors and stay still to reduce signal noise. Our dry sensors, in contrast, are easy to wear in conjunction with the BCI," Chin-Teng Lin, one of the paper's authors and a professor at the University of Technology Sydney, told Defense One in an email. "They function in real-world contexts and users may walk around while utilizing the system."

The researchers' graphene sensor performs well when worn inside a helmet, and they paired it with a Microsoft HoloLens. As the wearer looked around through the HoloLens, the occipital lobe of their brain produced measurable signals.

The sensor picked up these signals and passed them to a small Raspberry Pi 4B computer, which decoded a brain response known as the steady-state visually evoked potential (SSVEP) and translated it into instructions tied to a particular waypoint. A Ghost Robotics Q-UGV robot received those instructions and carried them out.
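The paper's actual decoding pipeline isn't reproduced here, but the general idea behind SSVEP-based control can be sketched simply: each selectable target flickers at a distinct rate, and the attended target's flicker frequency shows up as elevated power in the EEG over the visual cortex. The baseline below (synthetic data, hypothetical flicker rates) just compares spectral power at each candidate frequency and its first harmonic:

```python
import numpy as np

def detect_ssvep_target(eeg, fs, target_freqs):
    """Guess which flickering target the user is attending to.

    An SSVEP appears as increased EEG power at the flicker frequency
    of the attended target. This baseline scores each candidate by
    summing spectral power near its fundamental and first harmonic,
    then returns the index of the best-scoring target.
    """
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)

    def power_near(f):
        # Sum power in a narrow band around f to tolerate small drift.
        band = (freqs > f - 0.2) & (freqs < f + 0.2)
        return spectrum[band].sum()

    scores = [power_near(f) + power_near(2 * f) for f in target_freqs]
    return int(np.argmax(scores))

# Synthetic demo: 2 s of EEG-like noise plus a 12 Hz flicker response.
fs = 250                                  # sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = 0.5 * np.sin(2 * np.pi * 12.0 * t) + rng.normal(0, 0.3, t.size)

targets = [8.0, 10.0, 12.0, 15.0]         # hypothetical waypoint flicker rates
print(detect_ssvep_target(eeg, fs, targets))  # prints 2 (the 12 Hz target)
```

Real systems use more robust decoders (e.g., canonical correlation analysis across multiple electrodes), but the frequency-matching principle is the same.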

Along with the academics, the Australian military tested the device before the study's release; a video uploaded a month ago to the Australian Army's YouTube channel shows the successful experiment. In a second demonstration, a commander directed robots and fire teams to conduct a security search of a location. Soldiers monitored a visual feed from the robot through the HoloLens headset.

"This is very much an idea of what might be possible in the future," Australian Army Lt. Col. Kate Tollenaar says in the video. "We're very eager to collaborate with our stakeholders on use cases and to explore where the technology may go."

What kind of errors? The Navy has pioneered so many autonomy-related fields that it has also been the first to encounter several challenges. The MQ-25 program grew out of the considerably more ambitious X-47B effort to launch a strike-fighter drone from an aircraft carrier. Although the X-47B prototypes performed well in testing, the Navy ultimately decided they were too costly and too stealthy. And a September audit from the Government Accountability Office states that the Navy's large unmanned underwater vehicle program is $242 million over budget and three years behind schedule.

Have you ever watched a baby gazelle learn to walk? A fawn, the mammalian equivalent of a daddy longlegs, scrambles to its feet, falls, stands, and falls again. Eventually it holds still long enough to thrash its toothpick legs through a few wobbly, near-falling steps. Amazingly, minutes after this adorable display, the fawn is bounding around like an expert.

Now that we have a robot, we can recreate this famous Serengeti scenario.

The fawn in this scenario is a robotic dog at the University of California, Berkeley, and it learns remarkably fast, at least compared with other robots. Unlike flashier robots you may have seen online, this one stands out because it uses artificial intelligence to teach itself how to walk.

Starting on its back with its legs flailing, the robot learns within an hour to flip over, stand up, and walk. After that, just ten more minutes of harassment with a roll of cardboard is enough to teach it to withstand and recover from being pushed around by its handlers.

It's not the first robot to learn to walk using artificial intelligence. But unlike earlier robots, which learned through trial and error over countless simulation runs, the Berkeley bot acquired the skill entirely in the real world.

In a study posted to the arXiv preprint server, the researchers, Danijar Hafner, Alejandro Escontrela, and Philipp Wu, note that transferring skills learned in simulation to the real world isn't simple. Small discrepancies between simulation and reality can trip up a newly trained robot. On the other hand, training algorithms entirely in the real world is impractical: it takes too much time and effort.

For instance, four years ago OpenAI demonstrated a robotic hand that could manipulate a cube. The control algorithm, Dactyl, needed the equivalent of roughly 100 years of experience, in a simulation running on 6,144 CPU cores and 8 Nvidia V100 GPUs, to master this relatively simple task. Progress has been made since then, but the problem largely persists.

Pure reinforcement-learning algorithms need too much trial and error to acquire skills directly in the real world. Put simply, the learning process would wear out the machines, and the researchers, before any real progress could be made.

The Berkeley team used the Dreamer algorithm to attack this problem. Dreamer builds a "world model" that forecasts how likely a future action is to succeed, and its predictions become more accurate with practice. By filtering out less promising behaviors in advance, the world model lets the robot identify what works much more quickly.

By learning world models from past experience, robots can predict the outcomes of prospective actions, which minimizes the amount of real-world trial and error required.
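The core idea can be illustrated in miniature. The toy below is not Dreamer itself (which learns a latent neural dynamics model and plans in imagination over images); the five-cell environment and all names are made up for illustration. The agent first records real transitions, then evaluates candidate actions by "imagining" rollouts inside its learned model, so real-world trial and error is reserved for actions the model already rates as promising:

```python
# Toy world-model sketch (hypothetical environment, NOT the Dreamer
# algorithm itself): learn a transition model from real experience,
# then plan by imagining rollouts inside that model.

GOAL = 4  # positions 0..4 on a line; reaching 4 is success

def real_step(pos, action):
    """The real environment: -1 steps left, +1 steps right."""
    return max(0, min(GOAL, pos + action))

# Phase 1: gather real experience (a right sweep, then a left sweep)
# and record each observed transition as the world model.
model = {}
pos = 0
for action in [1] * GOAL + [-1] * GOAL:
    nxt = real_step(pos, action)
    model[(pos, action)] = nxt
    pos = nxt

# Phase 2: score candidate actions using ONLY the learned model.
def imagined_return(pos, first_action, horizon=10, gamma=0.9):
    """Imagine a rollout in the model; unseen transitions
    conservatively predict 'no movement'."""
    pos = model.get((pos, first_action), pos)
    discount = 1.0
    for _ in range(horizon):
        if pos == GOAL:
            return discount  # arriving sooner scores higher
        discount *= gamma
        pos = model.get((pos, 1), pos)  # imagine continuing right
    return 0.0

# Phase 3: act in the real world, choosing actions via imagination.
pos, steps = 0, 0
while pos != GOAL:
    best = max((-1, 1), key=lambda a: imagined_return(pos, a))
    pos = real_step(pos, best)
    steps += 1
print(steps)  # prints 4: the minimum number of real steps to the goal
```

The key point is in Phase 3: every candidate action is tried cheaply inside the model first, so the robot spends its scarce real-world steps only on moves the model predicts will pay off. Dreamer applies the same principle with learned neural dynamics instead of a lookup table.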