Exponential technology trends are discussed in detail in this section. One interesting fact: the rate at which technology accelerates is itself accelerating, and at an exponential rate.
We humans have evolved a rich repertoire of communication, from gesture to sophisticated languages. All of these forms of communication link otherwise separate individuals in such a way that they can share and express their singular experiences and work together collaboratively. In a new study, technology replaces language as a means of communicating by directly linking the activity of human brains. Electrical activity from the brains of a pair of human subjects was transmitted to the brain of a third individual in the form of magnetic signals, which conveyed an instruction to perform a task in a particular manner. This study opens the door to extraordinary new means of human collaboration while, at the same time, blurring fundamental notions about individual identity and autonomy in disconcerting ways.
Direct brain-to-brain communication has been a subject of intense interest for many years, driven by motives as diverse as futurist enthusiasm and military exigency. In his book Beyond Boundaries one of the leaders in the field, Miguel Nicolelis, described the merging of human brain activity as the future of humanity, the next stage in our species’ evolution. (Nicolelis serves on Scientific American’s board of advisers.) He has already conducted a study in which he linked together the brains of several rats using complex implanted electrodes known as brain-to-brain interfaces. Nicolelis and his co-authors described this achievement as the first “organic computer” with living brains tethered together as if they were so many microprocessors. The animals in this network learned to synchronize the electrical activity of their nerve cells to the same extent as those in a single brain. The networked brains were tested for things such as their ability to discriminate between two different patterns of electrical stimuli, and they routinely outperformed individual animals.
If networked rat brains are “smarter” than a single animal, imagine the capabilities of a biological supercomputer of networked human brains. Such a network could enable people to work across language barriers. It could provide those whose ability to communicate is impaired with a new means of doing so. Moreover, if the rat study is correct, networking human brains might enhance performance. Could such a network be a faster, more efficient and smarter way of working together?
The new paper addressed some of these questions by linking together the brain activity of a small network of humans. Three individuals sitting in separate rooms collaborated to correctly orient a block so that it could fill a gap between other blocks in a video game. Two individuals who acted as “senders” could see the gap and knew whether the block needed to be rotated to fit. The third individual, who served as the “receiver,” was blinded to the correct answer and needed to rely on the instructions sent by the senders.
The two senders were equipped with electroencephalographs (EEGs) that recorded their brain’s electrical activity. Senders were able to see the orientation of the block and decide whether to signal the receiver to rotate it. They focused on a light flashing at a high frequency to convey the instruction to rotate or focused on one flashing at a low frequency to signal not to do so. The differences in the flashing frequencies caused disparate brain responses in the senders, which were captured by the EEGs and sent, via computer interface, to the receiver. A magnetic pulse was delivered to the receiver using a transcranial magnetic stimulation (TMS) device if a sender signaled to rotate. That magnetic pulse caused a flash of light (a phosphene) in the receiver’s visual field as a cue to turn the block. The absence of a signal within a discrete period of time was the instruction not to turn the block. After gathering instructions from both senders, the receiver decided whether to rotate the block. Like the senders, the receiver was equipped with an EEG, in this case to signal that choice to the computer. Once the receiver decided on the orientation of the block, the game concluded, and the results were given to all three participants. This provided the senders with a chance to evaluate the receiver’s actions and the receiver with a chance to assess the accuracy of each sender.
The team was then given a second chance to improve its performance. Overall, five groups of individuals were tested using this network, called the “BrainNet,” and, on average, they achieved greater than 80 percent accuracy in completing the task.
In order to escalate the challenge, investigators sometimes added noise to the signal sent by one of the senders. Faced with conflicting or ambiguous directions, the receivers quickly learned to identify and follow the instructions of the more accurate sender. This process emulated some of the features of “conventional” social networks, according to the report.
This study is a natural extension of work previously done in laboratory animals. In addition to the work linking together rat brains, Nicolelis’s laboratory is responsible for linking multiple primate brains into a “Brainet” (not to be confused with the BrainNet discussed above), in which the primates learned to cooperate in the performance of a common task via brain-computer interfaces (BCIs). This time, three primates were connected to the same computer with implanted BCIs and simultaneously tried to move a cursor to a target. The animals were not directly linked to each other in this case, and the challenge was for them to perform a feat of parallel processing, each directing its activity toward a goal while continuously compensating for the activity of the others.
Brain-to-brain interfaces also span species: humans using noninvasive methods similar to those in the BrainNet study have controlled cockroaches or rats that had surgically implanted brain interfaces. In one report, a human using a noninvasive brain interface linked, via computer, to the BCI of an anesthetized rat was able to move the animal’s tail. In another study, a human controlled a rat as a freely moving cyborg.
The investigators in the new paper point out that it is the first report in which the brains of multiple humans have been linked in a completely noninvasive manner. They claim that the number of individuals whose brains could be networked is essentially unlimited. Yet the information being conveyed is currently very simple: a yes-or-no binary instruction. Other than being a very complex way to play a Tetris-like video game, where could these efforts lead?
The authors propose that information transfer using noninvasive approaches could be improved by simultaneously imaging brain activity using functional magnetic resonance imaging (fMRI) in order to increase the information a sender could transmit. But fMRI is not a simple procedure, and it would expand the complexity of an already extraordinarily complex approach to sharing information. The researchers also propose that TMS could be delivered, in a focused manner, to specific brain regions in order to elicit awareness of particular semantic content in the receiver’s brain.
Meanwhile the tools for more invasive—and perhaps more efficient—brain interfacing are developing rapidly. Elon Musk recently announced the development of a robotically implantable BCI containing 3,000 electrodes to provide extensive interaction between computers and nerve cells in the brain. While impressive in scope and sophistication, these efforts are dwarfed by government plans. The Defense Advanced Research Projects Agency (DARPA) has been leading engineering efforts to develop an implantable neural interface capable of engaging one million nerve cells simultaneously. While these BCIs are not being developed specifically for brain–to-brain interfacing, it is not difficult to imagine that they could be recruited for such purposes.
Even though the methods used here are noninvasive and therefore appear far less ominous than if a DARPA neural interface had been used, the technology still raises ethical concerns, particularly because the associated technologies are advancing so rapidly. For example, could some future embodiment of a brain-to-brain network enable a sender to have a coercive effect on a receiver, altering the latter’s sense of agency? Could a brain recording from a sender contain information that might someday be extracted and infringe on that person’s privacy? Could these efforts, at some point, compromise an individual’s sense of personhood?
This work takes us a step closer to the future Nicolelis imagined, in which, in the words of the late Nobel Prize–winning physicist Murray Gell-Mann, “thoughts and feelings would be completely shared with none of the selectivity or deception that language permits.” In addition to being somewhat voyeuristic in this pursuit of complete openness, Nicolelis misses the point. One of the nuances of human language is that often what is not said is as important as what is. The content concealed in the privacy of one’s mind is the core of individual autonomy. Whatever we stand to gain in collaboration or computing power by directly linking brains may come at the cost of things that are far more important.
One key benefit of 3D printing is the acceleration of product development. With the capacity of additive manufacturing (AM) for rapid prototyping, companies can develop and validate their designs faster, shortening the design process and allowing enterprises to react more quickly to emerging market opportunities.
Fast prototyping is of particular value in the oil and gas industry. It is often possible to employ agile 3D printing to shorten the development cycle of oil and gas components, thereby reducing the time it takes to proceed with full production. Rapid prototyping allows those in the oil and gas industry to engage in multiple design cycles and quickly test design concepts.
For example, GE Oil & Gas reduced its product testing and validation process by half when it used additive manufacturing to develop a new burner for the Nova LT16 Gas Turbine.
The oil and gas industry uses complex machinery that must meet robust performance and environmental standards. Additive manufacturing allows for innovative shapes and complex geometries that reduce the number of parts, thereby shortening assembly time, improving performance and reducing emissions.
Traditionally manufactured components must be broken down into constituent parts to allow for proper post-processing. For example, to allow for the successful machining of internal surfaces, many components must be fabricated from two halves that are ultimately welded together. In contrast, 3D printing allows for single-part fabrication of flow control and other oil/gas devices.
Compared to investment casting, additive manufacturing simplifies the production of pumps, turbomachinery, valves and other vital components, reducing costs and enhancing performance. For example, the GE Oil & Gas Additive Manufacturing Laboratory in Florence, Italy, employs multiple direct metal laser melting (DMLM) machines to fabricate turbomachinery components.
The oil and gas industry requires many low-volume components that are relatively expensive to manufacture, stock and replace.
As the oil and gas industry evolves, improved designs lead to shorter production cycles, which only increase the pressure on those responsible for stocking spare parts. In oil and gas, parts availability is key even as the problem of parts obsolescence looms larger.
Oil and gas operators face significant logistical challenges, due in part to the wide geographical distribution of operations across continents and oceans. The high cost of downtime only accentuates the parts supply challenge. Since the timely delivery of high-quality parts for maintenance and repairs is vital, most operators strive to minimize unscheduled downtime by maintaining large inventories of critical spare parts. Traditionally, it has been more cost-effective to overstock parts than to deal with extended downtime.
Additive manufacturing optimizes asset performance in a variety of ways. Increasingly, industry principals, suppliers and maintenance providers all pursue faster repairs and improved design quality through additive manufacturing. AM reduces warehouse stocks through on-demand printing. The attendant savings have more impact given the historical volatility of oil prices. Downturns increase the pressure to minimize inventories of spare parts.
Adequately stocking parts for legacy components presents multiple challenges. Overstocking consumes company resources, while understocking may lead to crippling downtime. 3D printing is a natural antidote to the ongoing challenge of supporting legacy installations. AM also addresses needs that arise when original manufacturers go out of business. In some instances, the quality of 3D-printed replacement parts exceeds that of the originals.
Downtime in the oil and gas industry is very expensive, particularly on remote and/or offshore rigs. Unplanned downtime costs an average offshore operator $49 million per year, according to one estimate. For some offshore operators, annual costs are much higher.
Additive manufacturing limits downtime through reduced lead times and supply chain enhancements. As the pursuit of new reserves takes oil companies to more remote locations, the prospect of on-site manufacturing becomes even more alluring. Drilling for oil and gas reaches impressive depths measured in miles. Many components on drilling rigs include multiple parts that must be welded, bolted or brazed together. The 3D printing of single-piece designs at remote locations could significantly reduce costs and downtime.
When a single, critical component unexpectedly fails at a remote location, the costs associated with remedying the situation are often very high. Sometimes, a single crucial part must be flown many miles to the oil/gas installation. The key is to maintain the quality control standards of a factory at a drilling rig.
As 3D printing innovation spreads through the oil and gas industry, input from field workers becomes invaluable. Successful 3D printing in oil and gas requires an initial stage of identifying which parts and components genuinely benefit from the advantages of additive manufacturing.
Even as 3D printing injects efficiencies into parts acquisition and distribution, the AM process also raises legal and regulatory concerns. To maintain rigorous performance and safety standards, industry certification of 3D printing materials is a potential hurdle. It is challenging to complete the transition from using additive manufacturing for prototyping to using it in the efficient production of end-use parts, which must meet robust performance and safety standards.
Traditionally, manufacturers either owned intellectual property or licensed it, and they produced components at centralized facilities. Using digital data to print parts at disparate locations raises concerns over the proper use of intellectual property. One way to deal with intellectual property is to use the “iTunes” approach. Just as an artist licenses the right to download his/her music, so too could oil service companies license the use of the CAD data required to print replacement parts.
3D printing was listed by BP (formerly known as British Petroleum) as one of six technologies that will significantly impact the energy sector in the next few years, along with artificial intelligence, blockchain, autonomous vehicles and alternative energy sources. According to the London-based multinational, the dual challenge of meeting growing demand in the developing world while reducing carbon emissions is creating a constant stream of ideas with the potential to transform how the world produces and consumes energy.
Some years back, Royal Dutch Shell began using 3D printing technology to make the design and construction of equipment used in oil and gas production faster and more efficient. Printing technology allows the company to create accurate scale prototypes in materials like plastic, which it tests and uses to improve designs and construction processes. The Shell Technology Centre Amsterdam, in the Netherlands, is the key site for this type of endeavor, led by mechanical instrument engineer Joost Kroon.
In the wild and wonderful world of technology, there is always a new trending topic. Right now, 4D printing is the hot new topic. In this article, we give you an overview of 4D printing: its technology, uses, and implications.
What is 4D Printing?
3D printing, also known as “additive manufacturing”, turns digital blueprints into physical objects by building them layer by layer. 4D printing is based on this technology, with one big difference: it uses special materials and sophisticated designs that are “programmed” to prompt a 3D print to change its shape.
So, basically, 4D printing is an extension of 3D printing that uses special materials to print objects that change shape after production. The trigger may be water, heat, wind or another form of energy.
Who Invented 4D Printing?
You can’t really pinpoint 4D printing to one inventor. 4D printing is currently developed by many industry leaders and research facilities. As of 2017, the most important 4D printing companies/research labs are MIT’s Self-Assembly Lab, 3D printing manufacturer Stratasys, and 3D software company Autodesk.
However, Australian and Singaporean researchers are rapidly gaining momentum. Their contributions extend the range of materials suitable for 4D printing and help bring the technology closer to marketability.
The technology is still pretty much in research and development. In some labs or prototyping facilities, 4D printing is already used. You may also witness this technology as part of art installations and architectural exhibitions.
Where Can You Buy a 4D Printer?
As a consumer, you can’t just walk into a store and buy a “4D printer” or get a license to “4D print” something. It is more likely that you will one day cross paths with 4D printing in your everyday life without even realizing it, be it in the form of medical implants or mechanical systems that reshape their configuration as environmental conditions change.
How Does 4D Printing Work?
Imagine having a box that was printed with a 3D printer. That alone is cool, but imagine if that box could automatically flatten itself for packing once triggered by some stimulus. It almost sounds silly when we just consider a box going from 3D to 2D (by flattening itself), but the impact that simple changes like this can have in the business world is massive.
For example, let us assume that a trucking company (we’ll call them Tucker Trucking for fun) has a warehouse where they store all of their shipping boxes. Whenever this trucking company receives a shipment of goods, they remove the goods from the boxes for delivery to their individual sites, and then they flatten the boxes to ship them back out to their departure point so that they can be re-used for other shipments.
Now, imagine that this same company turns over 5,000 trucks in a day. To keep up, they have to hire 200 people to constantly break down the boxes for shipment back out. At $10/hour, assuming a 7-hour working day, Tucker Trucking is paying $14,000/day to these basic labor employees.
So, by having boxes that flatten themselves upon stimulus, such a company could save approximately $5 million every single year! And this is just one example of how useful 4D printing could be!
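The back-of-the-envelope figures above can be checked with a quick calculation. All numbers are the article's illustrative assumptions for the hypothetical Tucker Trucking, not real data:

```python
# Labor-cost estimate for the hypothetical "Tucker Trucking" example.
# All figures are illustrative assumptions from the text, not real data.

workers = 200        # employees breaking down boxes
hourly_wage = 10     # dollars per hour
hours_per_day = 7    # length of a working day
days_per_year = 365  # assuming year-round operation

daily_cost = workers * hourly_wage * hours_per_day
annual_cost = daily_cost * days_per_year

print(f"Daily labor cost:  ${daily_cost:,}")    # $14,000
print(f"Annual labor cost: ${annual_cost:,}")   # $5,110,000
```

At roughly $5.1 million a year, boxes that flatten themselves on cue really would save on the order of $5 million annually.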
In 4D Printing, You Need Trigger Mechanisms
In 4D printing, you need some stimuli or trigger to start the transformation. These can include water, heat, light, or electrical currents. There are other forms of triggers, some of which have to be explored in depth through research.
For some 4D printing processes, you need special materials that react to these triggers. This makes the objects “programmable”, executing their 3D-printed “genetic code” whenever you trigger it.
Other research labs focus on “programming” the object’s desired shape into the microstructure of standard materials. This approach makes use of capabilities discovered in microscopic structures: when these are correctly configured, they produce the desired deformation of the macrostructure. The advantage is that such 4D-printed objects can use existing 3D printers and materials.
4D Printing: What's Next?
As mentioned above, 4D printing is still in the adolescent stages of becoming a science. However, concentrated groups of scientists across the globe believe that practical 4D printing may become a reality, and one of the fastest-growing technologies, in the near to medium term.
There are a variety of examples that show how far this technology has come, from simple folding objects to programmable shapeshifting materials and hydrogel composites. Years of research and testing will eventually lead to remarkable inventions, such as adaptive medical implants, self-assembling buildings, and even 4D-printed soft robots.
The myriad of uses that can come from 4D printing technology are very enticing. So make sure to keep your eye on this new and engaging industry in the coming years as new developments facilitate the betterment of our lives in this wonderful world of tomorrow.
Food… What we eat, and how we grow it, will be fundamentally transformed in the next decade.
Already, vertical farming is projected to exceed US$12 billion by mid-decade, surging at an astonishing 25 percent annual growth rate.
Meanwhile, the food 3D printing industry is expected to grow at an even higher rate, averaging nearly 40 percent annual growth.
And converging exponential technologies—from materials science to AI-driven digital agriculture—are not slowing down. Today’s breakthroughs will soon allow our planet to boost its food production by nearly 70 percent, using a fraction of the real estate and resources, to feed 9 billion by mid-century.
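As a rough sanity check on those projections, compound annual growth is easy to model. The starting market sizes below are hypothetical placeholders chosen only for illustration, not figures from the text:

```python
def project(start_billions: float, annual_growth: float, years: int) -> float:
    """Compound a starting market size forward at a fixed annual growth rate."""
    return start_billions * (1 + annual_growth) ** years

# Hypothetical starting sizes in US$ billions, chosen only for illustration.
vertical_farming = project(4.0, 0.25, 5)   # ~25% annual growth over 5 years
food_3d_printing = project(0.5, 0.40, 5)   # ~40% annual growth over 5 years

print(f"Vertical farming:  ${vertical_farming:.1f}B")   # about $12.2B
print(f"Food 3D printing:  ${food_3d_printing:.1f}B")   # about $2.7B
```

A market of about $4 billion growing at 25 percent a year passes US$12 billion within five years, consistent with the mid-decade projection above.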
What you consume, how it was grown, and how it will end up in your stomach will all ride the wave of converging exponentials, revolutionizing the most basic of human needs.
3D printing has already had a profound impact on the manufacturing sector. We are now able to print in hundreds of different materials, making anything from toys to houses to organs. Now we are finally seeing the emergence of 3D printers that can print food itself.
Redefine Meat, an Israeli startup, wants to tackle industrial meat production using 3D printers that can generate meat, no animals required. The printer takes in fat, water, and three different plant protein sources, using these ingredients to print a meat fiber matrix with trapped fat and water, thus mimicking the texture and flavor of real meat.
Slated for release in 2020 at a cost of $100,000, their machines are rapidly demonetizing and will begin by targeting clients in industrial-scale meat production.
Anrich3D aims to take this process a step further, 3D-printing meals that are customized to your medical records, health data from your smart wearables, and patterns detected by your sleep trackers. The company plans to use multiple extruders for multi-material printing, allowing them to dispense each ingredient precisely for nutritionally optimized meals. Currently in an R&D phase at the Nanyang Technological University in Singapore, the company hopes to have its first taste tests in 2020.
These are only a few of the many 3D food printing startups springing into existence. The benefits from such innovations are boundless.
Not only will food 3D printing grant consumers control over the ingredients and mixtures they consume, but it is already beginning to enable new innovations in flavor itself, democratizing far healthier meal options in newly customizable cuisine categories.
Vertical farming, whereby food is grown in vertical stacks (in skyscrapers and buildings rather than outside in fields), marks a classic case of converging exponential technologies. Over just the past decade, the technology has surged from a handful of early-stage pilots to a full-grown industry.
Today, the average American meal travels 1,500-2,500 miles to get to your plate. As summed up by Worldwatch Institute researcher Brian Halweil, “we are spending far more energy to get food to the table than the energy we get from eating the food.”
Additionally, the longer foods are out of the soil, the less nutritious they become, losing on average 45 percent of their nutrition before being consumed.
Yet beyond cutting down on time and transportation losses, vertical farming eliminates a whole host of issues in food production.
Relying on hydroponics and aeroponics, vertical farms allow us to grow crops with 90 percent less water than traditional agriculture—which is critical for our increasingly thirsty planet.
Currently, the largest player around is Bay Area-based Plenty Inc. With over $200 million in funding from Softbank, Plenty is taking a smart tech approach to indoor agriculture. Plants grow on 20-foot high towers, monitored by tens of thousands of cameras and sensors, optimized by big data and machine learning.
This allows the company to pack 40 plants in the space previously occupied by one. The process also produces yields 350X greater than outdoor farmland, using less than 1 percent as much water.
And rather than bespoke veggies for the wealthy few, Plenty’s processes allow it to knock 20-35 percent off the prices of traditional grocery stores. To date, Plenty has its home base in South San Francisco, a 100,000-square-foot farm in Kent, Washington, an indoor farm in the United Arab Emirates, and recently started construction on over 300 farms in China.
Another major player is New Jersey-based Aerofarms, which can now grow 2 million pounds of leafy greens without sunlight or soil.
To do this, Aerofarms leverages AI-controlled LEDs to provide optimized wavelengths of light for each individual plant. Using aeroponics, the company delivers nutrients by misting them directly onto the plants’ roots— no soil required. Rather, plants are suspended in a growth mesh fabric made from recycled water bottles. And here too, sensors, cameras and machine learning govern the entire process.
While 50-80 percent of the cost of vertical farming is human labor, autonomous robotics promises to solve that problem. Enter contenders like Iron Ox, a firm that has developed the Angus robot, capable of moving around plant-growing containers.
The writing is on the wall, and traditional agriculture is fast being turned on its head. As explained by Plenty’s CEO Matt Barnard, “Just like Google benefitted from the simultaneous combination of improved technology, better algorithms and masses of data, we are seeing the same [in vertical farming].”
In an era where materials science, nanotechnology, and biotechnology are rapidly becoming the same field of study, key advances are enabling us to create healthier, more nutritious, more efficient, and longer-lasting food.
For starters, we are now able to boost the photosynthetic abilities of plants.
Using novel techniques to improve a micro-step in the photosynthesis process chain, researchers at UCLA were able to boost tobacco crop yield by 14-20 percent. Meanwhile, the RIPE Project, backed by Bill Gates and run out of the University of Illinois, has matched and improved those numbers.
And to top things off, the University of Essex was even able to improve tobacco yield by 27-47 percent by increasing the levels of proteins involved in photorespiration.
In yet another win for food-related materials science, Santa Barbara-based Apeel Sciences is further tackling the vexing challenge of food waste. Now approaching commercialization, Apeel uses lipids and glycerolipids found in the peels, seeds, and pulps of all fruits and vegetables to create “cutin”—the fatty substance that composes the skin of fruits and prevents them from rapidly spoiling by trapping moisture.
By then spraying fruits with this generated substance, Apeel can preserve foods 60 percent longer, using an odorless, tasteless, colorless organic substance.
And stores across the U.S. are already using this method. By leveraging our advancing knowledge of plants and chemistry, materials science is allowing us to produce more food with far longer-lasting freshness and more nutritious value than ever before.
With advances in 3D printing, vertical farming and materials sciences, we can now make food smarter, more productive, and far more resilient.
By the end of the next decade, you should be able to 3D print a fusion cuisine dish from the comfort of your home, using ingredients harvested from vertical farms, with nutritional value optimized by AI and materials science. However, even this picture doesn’t account for all the rapid changes underway in the food industry.
Join me next week for Part 2 of the Future of Food for a discussion on how food production will be transformed, quite literally, from the bottom up.
Could a hamburger grown in a lab from Kobe beef stem cells be cheaper, better tasting and healthier for you?
Can you imagine a future where millions of square miles of pastoral land are reclaimed by nature, creating new forests and revitalizing the Earth’s vital carbon sinks?
Last week, we discussed the hyper-efficient food production systems of 2030. This week, we continue that discussion, but from a different perspective—because by the end of the next decade, we will witness the end of industrial animal agriculture as we know it.
Through the convergence of biotechnology and AgTech, we will witness the birth of the most ethical, nutritious, and environmentally sustainable food system ever devised by mankind.
Let’s dive in.
Cultured meat
Meat production is problematic, to say the least. A quarter of the planet’s available landmass is currently used to keep 20 billion chickens, 1.5 billion cattle and 1 billion sheep alive—that is, until we can kill them and eat them.
The suffering quotient is through the roof. As is the waste.
One out of seven Americans will go to bed hungry tonight, yet farm animals consume 30 percent of the world’s food crops.
Worse is the water involved. Meat production accounts for 70 percent of global water use. Compared to 1,500 liters required to produce a kilogram of wheat, it takes 15,000 liters to produce a kilogram of beef, meaning there’s enough water in an adult steer to float a U.S. Navy destroyer.
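The water figures quoted above make for a stark ratio; the numbers below are taken directly from the text:

```python
# Water footprint comparison using the figures quoted in the text.
water_liters_per_kg = {
    "wheat": 1_500,   # liters of water per kilogram of wheat
    "beef": 15_000,   # liters of water per kilogram of beef
}

ratio = water_liters_per_kg["beef"] / water_liters_per_kg["wheat"]
print(f"Beef requires {ratio:.0f}x as much water per kilogram as wheat")  # 10x
```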
Meat is also responsible for 14.5 percent of all greenhouse gases and a considerable portion of our deforestation problem. In fact, we are in the midst of one of the largest mass extinctions in history, and loss of land for agricultural production is currently the largest driver of that extinction.
Enter cultured meat: meat that is grown from a few cells into a full-blown steak.
Take a few stem cells from a live animal, typically via a biopsy so the animal isn’t harmed. Feed these cells a nutrient-rich solution. Power the whole process in bioreactors. Give the industry a few years to mature and the technology a few years to shed costs and, finally, we can produce an infinite number of steaks to feed an increasingly carnivorous population.
There are still numerous hurdles to overcome in the process, but we are fast approaching the point at which converging exponential technologies will enable this transformation of today’s food system.
Environmental issues aside, cultured meat has the potential to become far more cost-effective than conventional meat. It will soon compete with the latter on almost every market-oriented criteria in existence.
For starters, cultured meat production is mostly an automated process without much need for land or labor. Plus, it takes a few years to grow a cow in the wild, but only a few weeks to grow a cow’s worth of steak in the lab.
And it’s more than just steak. The meats in development range from pork sausage and chicken nuggets to foie gras and filet mignon—it all depends on which stem cells you start out with.
In late 2018, for example, Just Foods announced a partnership with Japanese Wagyu beef producer Toriyama to develop cultured meat from the cells of one of the rarest and most expensive steaks on Earth.
And what’s true for meat is also true for milk.
Perfect Day Foods, a Berkeley, California-based company started by two founders with a passion for cheese, has figured out how to make the animal-derived product without any involvement from cows. Combining gene sequencing with 3D printing and fermentation science, they’ve created a line of animal-free dairy products.
So what does this all mean? A fundamental reconfiguration of the way we source, consume, and pay for food—not to mention the environmental costs that are often written off as “externalities.”
Such a transformation will revamp our world in ways we have only begun to imagine.
The reduction in resource use alone is considerable. Cultured meat uses 99 percent less land, 82-96 percent less water, and produces 78-96 percent fewer greenhouse gas emissions.
Energy use drops somewhere between 7 and 45 percent depending on the meat involved (the savings vary by species, since poultry farming is already far less energy-intensive than beef ranching).
And by liberating a quarter of our landmass, we can also reforest, providing the habitat required to halt the biodiversity crisis and revitalizing the tremendous natural carbon sinks needed to slow global warming.
While that’s a haze of numbers to consider, what they add up to is astounding: An ethical and environmental solution to world hunger.
It's also a much healthier solution. Since we'll soon be growing steak from stem cells, we can do the seemingly impossible: make fast-food hamburgers that are actually good for you. We'll be able to increase helpful proteins, reduce saturated fat, even add vitamins.
Another huge win: cultured meat requires no antibiotics. Given the danger of diseases like mad cow disease, next-gen meat consumption will be far safer for humans, reducing—if not eliminating—the food industry’s industrial hygiene challenges.
In fact, as 70 percent of emerging diseases come from livestock, by turning to cultured meat, we’re both lowering the global disease burden and decreasing our risk of pandemic.
Over the last two weeks, we have explored how converging exponentials will revolutionize one of humanity’s most basic needs.
By the end of the next decade, anyone anywhere will have on-demand, ultra-cheap access to lab-grown meat— far more nutritious than livestock-derived products, with a near-zero carbon footprint and safety guarantees.
Don’t want to leave home? The rise of vertical farming, autonomous drone networks, and last-mile delivery advancements will make food deliverable to your doorstep, sourced from a low-land-use food production center. “Local” foods will be truly local. Either that, or download physiology-optimized recipes to your in-home food 3D printer.
While traditional agriculture has experienced shifts and industrialization, the way we grow food has remained roughly the same since 10,000 BC.
Soon to undergo one of the most monumental technological revolutions in history, our food system is about to be leagues more efficient, ethical, and sustainable than ever before—not to mention far healthier.
In just a few years, humans will become the first animal to derive its protein from other animals without harming any of them in the process. Meat miles—that is, the transportation costs involved in moving meat around—will nearly disappear. Slaughterhouses will become a ghost story we tell our grandchildren.
And a planet that is already significantly strained under the weight of seven billion people will have a fighting chance as we grow to roughly 10 billion by 2050.
The facility lies midway between Munich’s city center and its international airport, roughly 23 miles to the north. From the outside, it still looks like the state-run farm it once was, but peer through the windows of the old farmhouse and you’ll see rooms stuffed with cutting-edge laboratory equipment.
When Kessler unlocks one pen to show off its resident, a young sow wanders out and starts exploring. Like other pigs here, the sow is left nameless, so her caregivers won’t get too attached. She has to be coaxed back behind a metal gate. To the untrained eye, she acts and looks like pretty much any other pig, but smaller.
It’s what’s inside this animal that matters. Her body has been made a little less pig-like, with four genetic modifications that make her organs more likely to be accepted when transplanted into a human. If all goes according to plan, the heart busily pumping inside a pig like this might one day beat instead inside a person.
Different types of tissues from genetically engineered pigs are already being tested in humans. In China, researchers have transplanted insulin-producing pancreatic islet cells from gene-edited pigs into people with diabetes. A team in South Korea says it’s ready to try transplanting pig corneas into people, once it gets government approval. And at Massachusetts General Hospital, researchers announced in October that they had used gene-edited pig skin as a temporary wound covering for a person with severe burns. The skin patch, they say, worked as effectively as human skin, which is much harder to obtain.
But when it comes to life-or-death organs, like hearts and livers, transplant surgeons still must rely on human parts. One day, the dream goes, genetically modified pigs like this sow will be sliced open, their hearts, kidneys, lungs and livers sped to transplant centers to save desperately sick patients from death.
Read: Tetraplegic man walks with mind-reading exoskeleton
Thibault had surgery to place two implants on the surface of the brain, covering the parts of the brain that control movement. Sixty-four electrodes on each implant read the brain activity and beam the instructions to a nearby computer. Sophisticated computer software reads the brainwaves and turns them into instructions for controlling the exoskeleton. When he thinks "walk", it sets off a chain of movements in the robotic suit that move his legs forward. And he can control each of the arms, maneuvering them in three-dimensional space.
Thibault, who does not want his surname revealed, was an optician before he fell 15m in an incident at a night club four years ago. The injury to his spinal cord left him paralysed and he spent the next two years in hospital. But in 2017, he took part in the exoskeleton trial with Clinatec and the University of Grenoble. Initially he practised using the brain implants to control a virtual character, or avatar, in a computer game, then he moved on to walking in the suit.

"It was like [being the] first man on the Moon. I didn't walk for two years. I forgot what it is to stand, I forgot I was taller than a lot of people in the room," he said.

It took a lot longer to learn how to control the arms. "It was very difficult because it is a combination of multiple muscles and movements. This is the most impressive thing I do with the exoskeleton."
The 65kg suit of sophisticated robotics does not completely restore function. However, it is a marked advance on similar approaches that allow people to control a single limb with their thoughts.
Thibault does need to be attached to a ceiling-harness in order to minimise the risk of him falling over in the exoskeleton - it means the device is not yet ready to move outside the laboratory.
"This is far from autonomous walking," Prof Alim-Louis Benabid, the president of the Clinatec executive board, told BBC News. "He does not have the quick and precise movements not to fall, nobody on earth does this."

In tasks where Thibault had to touch specific targets by using the exoskeleton to move his upper and lower arms and rotate his wrists, he was successful 71% of the time.
Prof Benabid, who developed deep brain stimulation for Parkinson's disease, told the BBC: "We have solved the problem and shown the principle is correct. This is proof we can extend the mobility of patients in an exoskeleton. This is in [the] direction of giving better quality of life."
The French scientists say they can continue to refine the technology.
At the moment they are limited by the amount of data they can read from the brain, send to a computer, interpret and send to the exoskeleton in real-time.
They have 350 milliseconds to go from thought to movement otherwise the system becomes difficult to control.
It means out of the 64 electrodes on each implant, the researchers are using only 32.
So there is still the potential to read the brain in more detail, using more powerful computers and AI to interpret the information.
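The 350-millisecond constraint is easy to express in code. The sketch below is a hypothetical control loop, not Clinatec's actual software; the decoder is a placeholder and all function names are invented for illustration:

```python
import time

ELECTRODES_PER_IMPLANT = 64
ELECTRODES_IN_USE = 32        # per implant, per the article
LATENCY_BUDGET_S = 0.350      # thought-to-movement budget described above

def decode_intent(samples):
    """Placeholder decoder: map raw electrode samples to a movement command."""
    return "walk" if sum(samples) > 0 else "idle"

def control_step(read_samples, send_command):
    """One thought-to-movement cycle; must finish within the 350 ms budget."""
    start = time.monotonic()
    command = decode_intent(read_samples())
    send_command(command)
    elapsed = time.monotonic() - start
    return elapsed <= LATENCY_BUDGET_S  # False -> the suit becomes hard to control
```

Reading all 64 electrodes per implant, or using a heavier decoder, only pays off if each cycle still fits inside that budget, which is why the researchers currently use half the electrodes.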
There are also plans to develop finger control to allow Thibault to pick up and move objects.
He has already used the implant to control a wheelchair.
There are scientists investigating ways of using exoskeletons to enhance human abilities rather than to overcome paralysis, a field known as transhumanism. This includes military applications.
"We are absolutely not going in the direction of these extreme and stupid applications," Prof Benabid told the BBC. "Our job is to repair the injured patients who have lost function."
Prof Tom Shakespeare, from the London School of Hygiene and Tropical Medicine, said while this study presents a "welcome and exciting advance", proof of concept was a long way from usable clinical possibility.
"A danger of hype always exists in this field. Cost constraints mean that hi-tech options are never going to be available to most people in the world with spinal cord injury."
Only 15% of people with disabilities had a wheelchair or other assistive devices, he said.
Details of the exoskeleton have been published in The Lancet Neurology.
Microsoft's research division published a blog post about the new research yesterday, and researchers will present the method at the ACM Symposium on User Interface Software and Technology tomorrow.
The real-world path and corresponding virtual environment are planned in advance based on geolocation data, then updated on the fly as required in an uncontrolled outdoor environment. Any potential obstacles the user encounters while traversing real space are recorded by real-time sensing technologies in the VR apparatus, including a dual-band GPS sensor, two RGB depth cameras, and "a Windows Mixed Reality-provided relative position trace." Those obstacles may be replaced by obstacles in the virtual world, such as road blocks. Additionally, a video-game quest marker-like arrow will direct the user in what is deemed to be a safe and efficient traversal direction.
"Discovered obstacles that may move or appear in users' paths are managed by introducing moving virtual obstacles, or characters, such as pedestrians walking near users, to block them from any potential danger," the blog post explains. "Other options for controlling users' paths may include pets and dynamic events such as vehicles being parked, moving carts, and more, limited only by the imagination of the experience creator."
The system tries its best to introduce these virtual objects outside of the user's field of view to minimize unrealistic popup, not dissimilar to how a 3D video game environment uses streaming and frustum culling to maximize performance or introduce new assets to the scene.
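The "outside the field of view" rule can be sketched in a few lines. This is a simplified 2D illustration with an assumed 100-degree horizontal field of view, not code from DreamWalker:

```python
import math

FOV_DEGREES = 100  # assumed horizontal field of view of the headset

def outside_fov(user_pos, view_dir_deg, obstacle_pos, fov_deg=FOV_DEGREES):
    """Return True if the obstacle lies outside the user's view cone (2D sketch)."""
    dx = obstacle_pos[0] - user_pos[0]
    dy = obstacle_pos[1] - user_pos[1]
    bearing = math.degrees(math.atan2(dy, dx))
    # smallest signed angle between the view direction and the obstacle bearing
    diff = (bearing - view_dir_deg + 180) % 360 - 180
    return abs(diff) > fov_deg / 2

# A detected real-world obstacle would only be replaced by a virtual one
# (a road block, a pedestrian) once this check returns True.
```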
DreamWalker actually uses another Microsoft Research-developed tech called Mise-Unseen—which "allows covert changes to be made to a VR environment while a user's gaze and attention are fixed on something else"—to do this thanks to eye-tracking tech. While DreamWalker had a predecessor project called VRoamer that attempted to deliver safe experiences in uncontrolled indoor environments, most applications of VR being researched today or sold to consumers or businesses require a controlled space for safe use. DreamWalker is not ready for the public yet, but it's a step toward unexpected applications of VR that may have previously only been considered for AR—though there's a lot of crossover in AR and VR research and product development across the industry.
When Google Glass was first introduced many years ago, some commentators worried about the privacy and safety implications. Several Pokémon Go-related accidents introduced more questions about how technology companies can act responsibly when selling AR or VR products or experiences and what municipalities and legislators need to be thinking about moving forward. And with Apple expected to launch AR glasses as soon as next year, these questions are only going to become more pressing. It was enough to think about safety when using AR in public spaces, but the potential challenges and applications for VR in public will need further consideration, too.
"We are working in the field of swarm of drones and my previous research in the field of haptics was very helpful in introducing a new frontier of tactile human-swarm interactions," Dzmitry Tsetserukou, Professor at Skoltech and head of Intelligent Space Robotics laboratory, told TechXplore. "During our experiments with the swarm, however, we understood that current interfaces are too unfriendly and difficult to operate."
While conducting research investigating strategies for human-swarm interaction, Tsetserukou and his colleagues realised that there are currently no available interfaces that allow human operators to easily deploy a swarm of robots and control its movements in real time. At the moment, most swarms simply follow predefined trajectories, which have been set out by researchers before the robots start operating.
The human-swarm interaction strategy proposed by the researchers, on the other hand, allows a human user to guide the movements of a swarm of nano-quadrotor robots directly. It does this by considering the velocity of the user's hand and changing the swarm's formation shape or dynamics accordingly, using simulated impedance interlinks between the robots to produce behaviors that resemble those of swarms occurring in nature.
The system devised by the researchers includes a wearable tactile display that delivers patterns of vibration onto a user's fingers in order to inform them of current swarm dynamics (i.e., whether the swarm is expanding or shrinking). These vibration patterns allow human users to change the swarm dynamics, so that the swarm can avoid obstacles, simply by moving their hands at different speeds or in different directions.
The system detects the position of the user's hand using a highly precise motion capturing system called Vicon Vantage V5. In addition, the human operator and individual robots in the swarm are connected through impedance interlinks.
"These links behave like springs-dampers," Tsetserukou explained. "They prevent drones from flying close to the operator and to each other and from starting or stopping abruptly. Our strategy considerably improves the safety of human-swarm interactions and makes the behaviors of the swarm similar to those of real biological systems (e.g. bee swarms)."
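A virtual spring-damper link of the kind Tsetserukou describes can be sketched per axis as F = -k*(x - x_rest) - c*v. The gains below are illustrative choices, not values from the SwarmTouch work:

```python
def impedance_force(pos, vel, rest_pos, k=2.0, c=1.0):
    """Virtual spring-damper link (one axis): F = -k*(x - x_rest) - c*v.

    The spring term pulls a drone back toward its formation slot; the
    damper term suppresses abrupt starts and stops, as described above.
    k and c are illustrative gains only.
    """
    return -k * (pos - rest_pos) - c * vel

# A drone pushed 0.5 m past its slot while still moving away at 0.2 m/s
# feels a restoring force pulling it back toward the formation:
f = impedance_force(pos=1.5, vel=0.2, rest_pos=1.0)
print(f)  # -1.2
```

The same force law, applied between the operator's hand and each drone, is what keeps the drones from flying too close to the operator or to each other.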
In the future, SwarmTouch, the strategy developed by Tsetserukou and his colleagues, could be used to train swarms to navigate in warehouses, deliver goods within urban environments and even inspect bridges and other infrastructure. The researchers will soon be presenting another approach, called CloakSwarm, at the ACM Siggraph Asia 2019 conference.
They are also working on two additional drone-human interaction strategies, SlingDrone and WiredSwarm, which will be demonstrated at the ACM VRST 2019 conference. SlingDrone, the first of these strategies, is a mixed reality paradigm that allows users to operate drones using a pointing controller in an interactive way, producing a slingshot-like motion.
"This approach is somewhat similar to the popular mobile game Angry Birds, but with users pulling a real drone with a rope instead of on a touch screen, in order to navigate its ballistic trajectory in virtual reality," Tsetserukou explained. "SlingDrone allows you to point a virtual drone in the direction you want it to fly in and at the same time a real drone will fly to the target position and bring you the object you wish to get hold of. WiredSwarm, on the other hand, is a swarm of drones that are attached to the user's fingers with leashes, which can provide high-fidelity haptic feedback to a VR user. We call this new type of interface the first flying wearable haptic interface."
The team used artificial evolution to enable the robots to automatically learn swarm behaviors which are understandable to humans. This new advance published today in Advanced Intelligent Systems, could create new robotic possibilities for environmental monitoring, disaster recovery, infrastructure maintenance, logistics and agriculture.
Until now, artificial evolution has typically been run on a computer which is external to the swarm, with the best strategy then copied to the robots. However, this approach is limiting as it requires external infrastructure and a laboratory setting.
By using a custom-made swarm of robots with high processing power embedded within the swarm, the Bristol team were able to discover which rules give rise to desired swarm behaviors. This could lead to robotic swarms which are able to continuously and independently adapt in the wild, to meet the environments and tasks at hand. By making the evolved controllers understandable to humans, the controllers can also be queried, explained and improved.
Lead author, Simon Jones, from the University of Bristol's Robotics Lab said: "Human-understandable controllers allow us to analyze and verify automatic designs, to ensure safety for deployment in real-world applications."
Co-led by Dr. Sabine Hauert, the engineers took advantage of recent advances in high-performance mobile computing to build a swarm of robots inspired by those in nature. Their "Teraflop Swarm" has the ability to run the computationally intensive automatic design process entirely within the swarm, freeing it from the constraint of off-line resources. The swarm reaches a high level of performance within just 15 minutes, much faster than previous embodied evolution methods, and with no reliance on external infrastructure.
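To make the idea of onboard evolution concrete, here is a toy mutate-evaluate-select loop of the kind that could run on a robot's own processor. The fitness function is a stand-in for real trials on the robots, and none of this is the Bristol team's actual code:

```python
import random

def fitness(controller):
    """Toy objective: how closely a controller's parameter vector matches a
    target behavior. Stands in for real trials run on the robots themselves."""
    target = [0.5, -0.2, 0.8]
    return -sum((c - t) ** 2 for c, t in zip(controller, target))

def evolve(generations=200, pop_size=10, seed=0):
    """Minimal evolutionary loop: keep the best controller, mutate copies of it."""
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        best = max(population, key=fitness)
        population = [best] + [
            [g + rng.gauss(0, 0.05) for g in best] for _ in range(pop_size - 1)
        ]
    return max(population, key=fitness)

best_controller = evolve()
```

Because the loop is simple and self-contained, it can keep running after deployment, which is what allows a swarm to go on adapting "in the wild" rather than being frozen at whatever a lab computer produced.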
Dr. Hauert, Senior Lecturer in Robotics in the Department of Engineering Mathematics and Bristol Robotics Laboratory (BRL), said: "This is the first step towards robot swarms that automatically discover suitable swarm strategies in the wild."
"The next step will be to get these robot swarms out of the lab and demonstrate our proposed approach in real-world applications."
By freeing the swarm of external infrastructure, and by showing that it is possible to analyze, understand and explain the generated controllers, the researchers will move towards the automatic design of swarm controllers in real-world applications.
In the future, starting from scratch, a robot swarm could discover a suitable strategy directly in situ, and change the strategy when the swarm task, or environment changes.
Professor Alan Winfield, BRL and Science Communication Unit, UWE, said: "In many modern AI systems, especially those that employ Deep Learning, it is almost impossible to understand why the system made a particular decision. This lack of transparency can be a real problem if the system makes a bad decision and causes harm. An important advantage of the system described in this paper is that it is transparent: its decision making process is understandable by humans."
...The researchers use artificial intelligence to turn two-dimensional images into stacks of virtual three-dimensional slices showing activity inside organisms.
“This is a very powerful new method that is enabled by deep learning to perform 3D imaging of live specimens, with the least exposure to light, which can be toxic to samples,” said senior author Aydogan Ozcan, UCLA chancellor’s professor of electrical and computer engineering and associate director of the California NanoSystems Institute at UCLA.
In addition to sparing specimens from potentially damaging doses of light, this system could offer biologists and life science researchers a new tool for 3D imaging that is simpler, faster and much less expensive than current methods. The opportunity to correct for aberrations may allow scientists studying live organisms to collect data from images that otherwise would be unusable. Investigators could also gain virtual access to expensive and complicated equipment.
This conversion is valuable because the confocal microscope creates images that are sharper, with more contrast, than those from the wide-field microscope. On the other hand, the wide-field microscope captures images at less expense and with fewer technical requirements.
Waymo has been facing challenges in commercializing self-driving cars. Morgan Stanley cut its valuation of Waymo by 40% last month, from $175 billion to $105 billion, concluding that the industry is moving toward commercialization more slowly than expected, and noted that Waymo still relies on human safety drivers, as reported by CNBC.
“Waymo is growing our investment and teams in both the Detroit and Phoenix areas, and we want to bring our operations teams together in these locations to best support our riders and our ride-hailing service,” a Waymo spokesperson said in a statement sent Friday to CNBC. “As a result we’ve decided to relocate all Austin positions to Detroit and Phoenix. We are working closely with employees, offering them the opportunity to transfer, as well as with our staffing partners to ensure everyone receives transition pay and relocation assistance.”
What if, to get from A to B, you didn't have to move your body? What if you could quote Captain Kirk and just say: “Beam me up, Scotty”? Well, shy of the Star Trek transporter, there’s the world of avatars.
An avatar is a second self, typically in one of two forms. The digital version has been around for a couple of decades. It emerged from the video game industry and was popularized by virtual world sites like Second Life and books-turned-blockbusters like Ready Player One.
A VR headset teleports your eyes and ears to another location, while a set of haptic sensors shifts your sense of touch. Suddenly, you’re inside an avatar inside a virtual world. As you move in the real world, your avatar moves in the virtual one. Use this technology to give a lecture and you can do it from the comfort of your living room, skipping the trip to the airport, the cross-country flight, and the ride to the conference center.
Robots are the second form of avatars. Imagine a humanoid robot that you can occupy at will. Maybe, in a city far from home, you’ve rented the bot by the minute—via a different kind of ride sharing company—or maybe you have spare robot avatars located around the country.
Either way, put on VR goggles and a haptic suit, and you can teleport your senses into that robot. This allows you to walk around, shake hands, and take action—all without leaving your home.
And like the rest of the tech we’ve been talking about, even this future isn’t far away.
In 2018, entrepreneur Dr. Harry Kloor recommended to All Nippon Airways (ANA), Japan’s largest airline, the design of an Avatar XPRIZE. ANA then funded this vision to the tune of $10 million to speed the development of robotic avatars. Why? Because ANA knows this is one of the technologies likely to disrupt their own airline industry, and they want to be ready.
ANA recently announced its “newme” robot that humans can use to virtually explore new places. The colorful robots have Roomba-like wheeled bases and cameras mounted around eye-level, which capture surroundings viewable through VR headsets.
If the robot were stationed in your parents’ home, you could cruise around the rooms and chat with your family at any time of day. After revealing the technology at Tokyo’s Combined Exhibition of Advanced Technologies in October, ANA plans to deploy 1,000 newme robots by 2020.
With virtual avatars like “newme,” geography, distance, and cost will no longer limit our travel choices. From attractions like the Eiffel Tower or the pyramids of Egypt, to unreachable destinations like the Moon or deep sea, we will be able to transcend our own physical limits, explore the world and outer space, and access nearly any experience imaginable.
Over the past several years, the cryptocurrency and blockchain space has seen immense growth, not only regarding digital asset prices, but also in the many startups and existing businesses that are now involved in the industry and its underlying technology.
Numbers from the past year, however, show decreased interest from job seekers even though the number of jobs in the crypto and blockchain industry has grown, according to recent data gathered by popular employment site Indeed.
From September 2018 to September 2019, “the share of cryptocurrency job postings per million on Indeed have increased by 26%, while the share of job searches per million have decreased by 53%,” Indeed detailed in a report provided to me.
These numbers are also down from the figures posted one year prior. September 2017 to September 2018 tallied a 214% growth in crypto and blockchain jobs on Indeed with a 14% growth in related job seeker searches, according to an email from an Indeed representative.
By allowing digital information to be distributed but not copied, blockchain technology created the backbone of a new type of internet.
A blockchain is, in the simplest of terms, a time-stamped series of immutable records of data that is managed by a cluster of computers not owned by any single entity. Each of these records of data (the “block”) is secured and bound to the others using cryptographic principles (the “chain”).
A blockchain carries no transaction cost. (An infrastructure cost yes, but no transaction cost.) The blockchain is a simple yet ingenious way of passing information from A to B in a fully automated and safe manner. One party to a transaction initiates the process by creating a block. This block is verified by thousands, perhaps millions of computers distributed around the net. The verified block is added to a chain, which is stored across the net, creating not just a unique record, but a unique record with a unique history. Falsifying a single record would mean falsifying the entire chain in millions of instances. That is virtually impossible. Bitcoin uses this model for monetary transactions, but it can be deployed in many other ways.
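The "unique record with a unique history" idea is easy to demonstrate. Below is a minimal, single-machine sketch of a hash chain in Python; a real blockchain adds distributed verification and consensus across thousands of computers on top of this:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents (everything except the stored hash itself)."""
    payload = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    """Each block binds its data to the previous block's hash."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def is_valid(chain):
    """Valid if every hash matches its block and links to its predecessor."""
    return all(
        chain[i]["hash"] == block_hash(chain[i])
        and (i == 0 or chain[i]["prev_hash"] == chain[i - 1]["hash"])
        for i in range(len(chain))
    )

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("A pays B 5 coins", chain[-1]["hash"]))
chain.append(make_block("B pays C 2 coins", chain[-1]["hash"]))
print(is_valid(chain))                    # True

chain[1]["data"] = "A pays B 500 coins"   # falsify one record...
print(is_valid(chain))                    # False: its stored hash no longer matches
```

Falsifying the record and recomputing its hash doesn't help either: that changes the hash the next block points to, so the forger would have to rewrite every subsequent block, on a majority of the computers holding copies.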
Another example: the gig economy hub Fiverr charges $0.50 on a $5 transaction between individuals buying and selling services. Using blockchain technology the transaction is free. Ergo, Fiverr will cease to exist. So will auction houses and any other business entity based on the market-maker principle.
Even recent entrants like Uber and Airbnb are threatened by blockchain technology. All you need to do is encode the transactional information for a car ride or an overnight stay, and again you have a perfectly safe way that disrupts the business model of the companies which have just begun to challenge the traditional economy. We are not just cutting out the fee-processing middle man, we are also eliminating the need for the match-making platform.
Because blockchain transactions are free, you can charge minuscule amounts, say 1/100 of a cent for a video view or article read. Why should I pay The Economist or National Geographic an annual subscription fee if I can pay per article on Facebook or my favorite chat app? Again, remember that blockchain transactions carry no transaction cost. You can charge for anything in any amount without worrying about third parties cutting into your profits.
The reason the blockchain has gained so much admiration comes down to three main properties: decentralization, transparency, and immutability.
Wikipedia defines infrastructure as "the physical components of interrelated systems providing commodities and services essential to enable, sustain, or enhance societal living conditions," while the Oxford Dictionary defines cognition as "the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses." Cognition includes many processes, such as perception, learning, reasoning, memory, and communication with other cognitive systems.

Building infrastructure has been with us since the dawn of human civilization, and traditionally this has shaped our understanding of the term: huge monuments, roads, rail networks, irrigation facilities, hospitals, educational institutions and the like. In recent decades, however, that understanding has shifted significantly. With humans remaining the common factor, the internet, connected devices, GPS systems, servers, AI programs, software and hardware, datacenters, the cloud as a software hosting location, drone technologies, robotic environments and more now combine to form what can be termed cognitive infrastructure. We are beginning to realize the combined impact of this cognitive infrastructure as a potential game changer in all spheres of our existence.
Cognitive infrastructure and its constituents can also produce outcomes that are undesirable for fundamental human activities: political interference, war, an expansionist mindset, and software and cyber dominance. On the cyber-dominance front, all forms of cybercrime (ransomware, armies of bots that roam through private, public, corporate and government accounts on LinkedIn, Facebook, Instagram, Twitter and elsewhere, causing massive data security breaches) are by-products of integrated cognitive infrastructure. Regulating this emerging infrastructure requires a thorough understanding of the characteristics and behaviors of its constituents. The Pentagon's Defense Innovation Board, for instance, has laid out five ethical policy points to regulate AI, a critical constituent of cognitive infrastructure that is too complex for unaided human cognition.

While infrastructure development has progressed gradually through irrigation systems, electrification, communications and entertainment, these are examples of complex yet largely single-purpose systems. With the emergence of exponential technologies, combined with innovation in other techno-commercial areas, cognitive infrastructure has proved to be a meta-infrastructure, able to integrate with almost all other systems and technologies and to accelerate them. Disruptive technologies such as AI, 5G, IoT, social media, big data and cloud computing are all potential components. One recent prediction holds that by the end of 2020, around 30 billion devices (transportation systems, household appliances, gadgets, factory equipment and, of course, computers), along with 1 trillion sensors, will be connected through the Internet, served by around 425 million servers deployed globally.
These devices, equipped with high-end sensors and the capacity to store and analyze data, are action-oriented objects that can predict events and take corrective action as designed.
As Peter Diamandis describes it, human thinking and cognition are linear while technologies advance exponentially. These technologies form the cognitive infrastructure. Policy makers and government agencies therefore need to conceptualize this shift and conduct regular reality checks, so that they can manage and monitor the emergence of cognitive infrastructure without stifling technological advancement.
DGTAL.AI Inc., 16192 Coastal Highway, Lewes, Delaware
The UN has identified Sustainable Development Goals (SDGs) that need the attention of world communities, and technological innovations offer great solutions to address them. At DGTAL.AI we discuss SDGs and technological solutions on a single platform for the benefit of tech enthusiasts and talents, adding value toward the solutions and thereby serving the cause of a billion people. DGTAL.AI is a not-for-profit initiative.
Copyright © 2020, DGTAL.AI Inc.
DGTAL.AI News & Views of Exponential Tech world