AI learns your clothing preferences and passes the next design choice to Alexa, which places an order for a new T-shirt and pair of denim jeans. A 3D printer manufactures them in minutes to fully customized specs, a drone delivers the new outfit to your doorstep, and your account is debited. On the morning of 8 February you open the door to receive the new outfit as a birthday gift from a machine, and that is the first you hear of the order. Happy!!!
That future is not far off, and today's shopping mall may not exist in it. Prepare for a future in which the retail supply chain is simplified and localized, with on-the-spot 3D printing that can meet any demand.
3D Printed Clothing
The Wohlers Report forecasts that the 3D printing market will double to US$35.6 billion by 2024. The new generation of 3D printers can now manufacture fully customized clothes from almost any wearable material, at mass scale.
In 2010, four MIT grads who hated their formal outfits set out to bring some whiz-bang to the boardroom. Together they formed Ministry of Supply, a clothing company intent on borrowing spacesuit technology from NASA for a line of dress shirts. They followed up with phase-change materials that control body heat and reduce perspiration and odor. The fabric also adapts to the wearer’s shape, staying tucked in and wrinkle-free all day. Ministry of Supply now makes high-performance smart clothing for both sexes, including a new line of intelligent jackets that respond to voice commands and learn to heat automatically to your desired temperature. Recently, the company extended its high-tech approach to manufacturing: with 4,000 individual needles and a dozen different yarns, its knitting machine can create any combination of materials and colors desired, with zero waste.
3D Printing in Other Retail Sectors
The fashion industry is only part of the story, as 3D printing is now showing up all over retail. Staples, the office supply company, launched an online service whereby customers upload designs for office products from home, Staples employees print them in-store, and the final product is delivered to your door. The French hardware retailer Leroy Merlin has taken this a step further, allowing customers to print bespoke hardware in its stores. Need a ten-inch flat-head nail or a curving socket wrench made to reach around corners? They’ve got you covered.
Personal AIs will draw in new customers, populating stores equipped with virtual try-on mirrors, clothing pre-tailored to fit us perfectly, and cashier-less checkouts. Meanwhile, an exploding convergence of sensors and 3D printing will add even more value to the in-person experience, as ultra-fast body scans enable 3D printers to create perfect products on the spot, with zero waste.
And this is only where we are today. Over the next ten years, 3D printing will reshape retail in four key ways.
As a vestige of the agrarian economy, mothers made birthday cakes from scratch, mixing farm commodities (flour, sugar, butter, and eggs) that together cost mere dimes. As the goods-based industrial economy advanced, moms paid a dollar or two to Betty Crocker for premixed ingredients. Later, when the service economy took hold, busy parents ordered cakes from the bakery or grocery store, which, at $10 or $15, cost ten times as much as the packaged ingredients. Now, in the time-starved 1990s, parents neither make the birthday cake nor even throw the party. Instead, they spend $100 or more to “outsource” the entire event to party halls, or some other business that stages a memorable event for their kids—and often throws in the cake for free. Welcome to the emerging experience economy.
By replacing premade ingredients with premade experiences, the experience economy is a new kind of disruptive business model satisfying a new kind of need. For most of history, we didn’t want pre-packaged experiences because life itself was the experience. Just staying safe, warm, and fed was adventure enough.
Technology changed that equation. At the turn of the Industrial Revolution, the richest people on the planet didn’t have air-conditioning, running water, or indoor plumbing. They lacked automobiles, refrigerators, and telephones. Let alone computers. Today, even folks living below the U.S. poverty line draw on these conveniences. Those better off draw on them much more. So much, in fact, that we’ve started to take our stuff for granted. As a result, for many, experiences—tactile, memorable, and real—have become more valuable than possessions. And retailers have capitalized on this trend.
Starbucks made bank extending the familiarity of the local coffee shop to a global scale. Outdoor retailer Cabela’s turned their showrooms into faux outdoor adventures complete with waterfalls. And now converging exponentials will take the experience economy to new heights.
Consider the Westfield shopping center group’s ten-year vision for the future of retail: “Destination 2028.” Replete with hanging sensory gardens, smart changing rooms, and mindfulness workshops, Westfield’s proposed shopping center will be a “hyper-connected micro-city” with an incredible amount of personalization. Smart bathrooms will provide individually customized nutrition and hydration tips; eye scanners and AI can personalize shopping “fast lanes” based on prior purchases; and magic mirrors will offer virtual reflections of you wearing an entire range of new products.
Combining entertainment, wellness, learning, and personalized product-matching, Westfield’s “Destination 2028” aims to help make you a better you—and they’re betting that this is worth the inconvenience of leaving home to do your shopping.
It’s a big bet. In the U.S., there are over 1,100 malls and 40,000 shopping centers. Minnesota’s Mall of America is a small town, spanning 5.6 million square feet and housing 500 stores. China’s largest mall covers over 7 million square feet and is larger than the Pentagon. An upgraded experience economy might mean these malls have a chance of staying in business. But it’ll be a very different type of business. If successful, retail will become a convergent industry, where time spent at the mall pays multiple dividends. Shopping becomes healthcare becomes entertainment becomes education, and so forth.
Or we take avenue B, whereby our malls become a distant memory and shopping itself becomes another task outsourced to your AI. What’s coming next is astounding… Why drive when you can speak? Revenue from products purchased via voice commands is expected to quadruple from today’s US$2 billion to US$8 billion by 2023. VR, AR and 3D printing are converging with AI, drones and 5G to transform shopping on every dimension. As a result, shopping is becoming dematerialized, demonetized, democratized, and delocalized… a top-to-bottom transformation of the retail world.
BALLOONS DESIGNED TO DELIVER CONNECTIVITY
The Internet has transformed the way the world communicates, does business, learns, governs, and exchanges ideas, but not everyone can harness the benefits and advantages it provides. Right now, billions of people across the globe still do not have Internet access. They are completely left out of a digital revolution that could improve their finances, education, and health.
Project Loon is a radical approach to expanding Internet connectivity. Instead of trying to extend the Internet from the ground, Loon takes to the sky via a network of balloons, traveling along the edge of space, to expand Internet connectivity to rural areas, fill coverage gaps, and improve network resilience in the event of a disaster.
The Loon team needed to design a balloon that could last 100+ days in the stratosphere in order to deliver consistent connectivity. But how do you design and test something that spends so much time in harsh conditions 20 kilometers up? To see the stratosphere's effect on the balloons, the team brings the stratosphere down to earth, testing the balloons in a giant hangar that simulates sub-zero temperatures, high-speed winds, rain, and snow. The team also closely inspects each balloon with everything from mass spectrometers to soap bubbles in order to find the smallest leaks. Launching balloons that had never existed before posed another problem: how do you get a lot of these balloons in the air quickly? To safely and reliably get the balloons up and operational, the team designed and custom-built Autolaunchers — large cranes capable of filling and launching a balloon into the stratosphere every 30 minutes, above airplanes, birds, and the weather.
Winds in the stratosphere are stratified: they are made up of layers that travel in different directions and at different speeds. While one layer may cause the balloon to drift far from its target location, another nearby layer might blow the balloon in the right direction. One of the Loon team's original insights was to move the balloons up or down into helpful wind patterns, letting them sail with the winds rather than fly against them. This “go-with-the-flow” technique allows the balloons to get in the right spot quickly and efficiently.
To identify these helpful wind patterns, Loon uses advanced predictive models to create maps of the skies. The maps allow the team to determine the wind speed and direction at each altitude, time, and location. With these maps in place, the team developed smart algorithms to determine the most effective combination of stratospheric paths. With the aid of these algorithms, the balloons can accurately sail the winds over thousands of kilometers to get where they need to go, then ride the wind layers to remain clustered around these destinations.
Loon has delivered connectivity to communities where the communications infrastructure has been damaged or wiped out. Loon partnered with Telefonica over many months in 2017 to provide basic Internet connectivity to tens of thousands of people across Peru who were displaced by extreme rains and flooding. The Loon team also worked closely with AT&T and T-Mobile to bring the Internet to more than 200,000 people in Puerto Rico after Hurricane Maria made landfall.
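The altitude-selection idea can be sketched as a toy greedy search. This is not Loon's actual algorithm (which involves full predictive wind maps and path planning); all names and numbers below are illustrative assumptions. The balloon simply picks the wind layer whose drift best advances it toward the target:

```python
import math

def best_layer(balloon, target, layers):
    """Pick the wind layer whose drift best advances the balloon
    toward the target: a toy version of 'sailing with the winds'."""
    # Unit vector pointing from the balloon to the target.
    dx, dy = target[0] - balloon[0], target[1] - balloon[1]
    dist = math.hypot(dx, dy)
    ux, uy = dx / dist, dy / dist

    # Score each layer by how much its wind pushes toward the target
    # (dot product of wind vector with the desired direction).
    def score(layer):
        altitude_km, wind_east, wind_north = layer
        return wind_east * ux + wind_north * uy

    return max(layers, key=score)

# Hypothetical forecast: (altitude_km, wind_east_mps, wind_north_mps).
layers = [(18, -5.0, 2.0), (19, 8.0, 1.0), (20, 3.0, -6.0)]
print(best_layer((0.0, 0.0), (100.0, 10.0), layers))  # picks the 19 km layer
```

A real controller would repeat this choice over time as forecasts update, and trade off progress against staying within the balloon's altitude envelope.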
In the fast-paced, complicated world of quantum chemistry, A.I.s are used to help chemists calculate important chemical properties and make predictions about experimental outcomes. But in order to do this accurately, these A.I.s need a pretty strong understanding of the fundamental rules of quantum mechanics, and the researchers behind a new interdisciplinary study on the topic say these quantum predictions have been lacking for some time. A new machine learning framework could be the answer. While previous quantum-savvy A.I. algorithms have been useful, say the study's researchers, they have failed to capture some of quantum chemistry's most important characteristics in their prediction models. Namely, these previous models have neglected to account for electronic degrees of freedom: the number of changing factors required to describe a specific state of a system.
Quantum mechanics famously allows states to exist and not exist simultaneously, and using degrees of freedom can help scientists understand how to describe a system accurately and usefully. Without accounting for these degrees of freedom, previous A.I.s have described these quantum chemistry experiments using more classical scalar, vector and tensor fields, which required much more calculation time and energy. The researchers of this new study have instead designed a framework that describes them in the more quantumly accurate, and faster, form of ground-state wave functions. The study describing their approach was published last week in the journal Nature Communications.
One of the study’s authors, Reinhard Maurer from the Department of Chemistry at the University of Warwick, said in a statement that their algorithm’s combined flexibility and quantum know-how will help make it an important tool for quantum chemistry.
“This has been a joint three year effort, which required computer science know-how to develop an artificial intelligence algorithm flexible enough to capture the shape and behaviour of wave functions, but also chemistry and physics know-how to process and represent quantum chemical data in a form that is manageable for the algorithm,” said Maurer.
The study authors write that this deep-learning framework, called SchNOrb (which we can only imagine is as fun to pronounce as it looks), allows them to predict molecular orbitals with “close to ‘chemical accuracy’”, which in turn provides an accurate prediction of the molecules’ electronic structure and a “rich chemical interpretation” of their reaction dynamics.
The capabilities demonstrated by this algorithm would help chemists more effectively design “purpose-built molecules” for medical and industry use.
However, while the authors write that SchNOrb is proof that such an application is useful and feasible, the large number of atomic orbitals it is able to process also leaves it vulnerable to increased prediction errors. The authors write that the accumulation of these prediction errors eventually led to a bottleneck in the prediction process — in many ways the same kind of efficiency problem they were trying to improve on from previous approaches. To account for this, the authors write that in future studies they will need to better understand and improve the neural network used here. That said, the authors remain confident that this preliminary research demonstrates a path toward more effective collaboration between quantum chemists and these quantum-savvy A.I.s, and that this collaboration will become an essential part of the discovery process in years to come.
https://www.inverse.com/article/61111-ai-chemistry-quantum 21 Nov 2019
On 12th Nov 2019, thousands of flights were canceled or delayed, some areas struggled under a foot of snow and more than 200 million people were forecast to freeze as a historic Arctic air mass swept across a wide swath of the nation Tuesday. Freezing temperatures were reported from the Canadian border to South Texas. The freeze was moving east, headed for a swath from New England to Florida. Chicagoans awoke to single digits, a few inches of snow and a forecast high of 20 degrees that would smash the city's record for the date by 8 degrees. That's after an American Eagle flight slid off a runway Monday while landing at O'Hare International Airport. No injuries were reported. "The combination of air temperatures in the single digits and blustery northwest winds has sent wind chills below zero," the National Weather Service in Chicago warned. "Dress warmly if heading outside!"
Underground cities have long been sci-fi fodder, but now governments and planners are taking them seriously. One of the biggest challenges to overcome is convincing people to be comfortable underground.
Back in 1800 BC, the people of the Cappadocia region of modern-day Turkey decided their environment was so hostile – with extreme weather and the constant threat of war – that they dug an entire city underground. Derinkuyu, the oldest underground city still in existence, housed 20,000 people, providing schools, houses, shopping areas and places of worship protected by large stone doors which allowed each floor to be closed off separately.
In 2010, Helsinki, Finland, essentially took the same approach. The city council approved an underground master plan, completed in 2019, that covers the city’s entire 214 square kilometers – combining energy conservation, shelter from the long, cold winter and an enormous prepper bunker in case of Russian aggression.
But it isn't just security and seasonal weather touted as reasons for living underground. Subterranean living offers an alternative to huge tower blocks for growing populations. Asmo Jaaksi, a partner at Helsinki architectural practice JKMM and the chief architect of the city’s underground Amos Rex Museum, says living underground conserves heat and may, for some, be one of the safest places as the climate emergency escalates. Helsinki has long pioneered underground living – the Temppeliaukio Church, designed by architects Timo and Tuomo Suomalainen, was sunk into the city’s Toolo district in 1969 and, in 1993, the city opened the Itakeskus Swimming Hall, a large recreation centre that can handle 1,000 customers on an average day and converts into an emergency shelter with space for 3,800 people. “Helsinki stands on bedrock – good foundations and very stable ground,” Jaaksi says. “The city is very overcrowded, and we have such long, dark and cold winters. Underground offers more room and connects us together away from the bad weather.” Ilkka Vähäaho, the head of Helsinki’s geotechnical division, agrees. Vähäaho says another main driver for developing underground is “the Finnish need to have open spaces even in the city centre – taking parts of the city underground would allow more open space on the surface.” This is where Helsinki’s plan could be pioneering a new attitude to subterranean living. With 60 per cent of the world’s population expected to be living in cities by 2050, meaning housing needs to be found for some 2.5 billion people, urban land is an increasingly limited resource. There is a practical limit to how high buildings can be built, and due to space constraints, protected buildings or districts and green belts, many cities, such as Paris, Mexico City and Singapore, are concluding that the answer lies not in more skyscrapers – instead, why not build down?
In 2017, Paris launched a competition, called Reinvent Paris 2, which asked designers to come up with uses for currently unused or under-used city-owned plots – most of which are underground. These include basements of historic buildings, tunnels freed up after cars were banned from the lower roads beside the Seine, unused reservoirs, old parking lots and former abattoirs. They have been turned into restaurants, shops, and a micro farm for edible insects.
Architects in Mexico City have taken another approach, proposing a 300m-deep underground pyramid, dubbed the Earthscraper, planned to sit as a mini-city beneath Mexico City’s main square. However, an $800 million (£620m) price tag saw the plans shelved.
In Singapore, meanwhile, the government has already invested more than $188m (£146m) in engineering and research into underground construction and has modified its property rights laws so that all basements now belong automatically to the state. According to Singapore’s Department of Statistics, a population of 5.53m people share the island’s mere 719 square kilometers of land. This makes it the third most densely populated place on Earth. To date, the city-state has built upward – with apartment buildings reaching as high as 70 stories – whilst reclaiming land to push out the island’s coastline. But with projections for 1.5m more people in the next 15 years, Singapore’s options are as limited as its space.
The city is exploring the idea of an Underground Science Park 80 meters below the surface, which would house 4,500 scientists and researchers in an earthscraper, with underground developments for retail parks, green city infrastructure, highways, train lines and channels for air-conditioning pipework. The design is cylindrical to withstand earthquakes.
Costs for the Park – estimated at twice the price of an equivalent surface build – have cooled Singapore’s ardour for the project. “It’s like making a model of a building – is it easier and cheaper to use paste wood sheets or to carve inside a tree stump?” argues Marcos Martínez Euklidiadas of the Carlos III University of Madrid’s engineering department. “When you build underground, you need to do everything you already do above ground, then add the cost and effort of digging.
"It’s likely that the driver of this will be resilient cities especially as global warming disrupts weather – like Singapore with its complicated weather patterns and fear of Chinese aggression. The end result is likely to be built partially underground in order to avoid extreme conditions but with enough access to natural light.”
Even so, the last few years have seen the Singapore government start moving strategic essentials underground – including a deep cavern beneath the city used to store ammunition and the Jurong Rock Caverns beneath Singapore’s Jurong Island, the region’s first subterranean oil storage facility. Built by Hyundai Engineering and Construction, the underground structure has five huge caverns 100 metres deep and eight kilometres of tunnels to hoard hydrocarbons. In March, the government revealed the first stages of its own underground master plan, covering three districts – Marina Bay, Jurong Innovation District and Punggol Digital District.
Speaking to the media at the launch, Singapore’s chief planner Hwang Yu-Ning said that the plans will create spaces for the future, build capacity for growth and conserve the environment. She pointed to Singapore's first 230kV underground substation in the basement of a commercial building, and credited the new centralised cooling system – which pumps chilled water through pipes to cool waterfront buildings around Marina Bay – with reducing energy consumption by around 40 per cent, helping the buildings slash their annual carbon dioxide emissions by 34,500 tons, equivalent to taking 10,000 cars off the road.
Energy conservation is one of underground’s big attractions for urban planners, according to Dale Russell, a professor in Innovation Design Engineering at the Royal College of Art. “Building underground saves space and, ultimately, power,” Russell explains. “The topography itself can generate energy, the rocks absorbing the sun’s heat in summer to keep the city cool, releasing it in winter like giant radiators to warm the earthscrapers. Within such cities, by 2069 we can imagine a complete self-contained travel and eco system underground, using hydroponic farming systems with artificial light to grow the city’s own food supply.”
The ground underneath Paris, for instance, has a high humidity and a constant temperature of 14C, whatever the outside weather. “If you build underground in very hot places like Dubai or very cold places like the Nordics your project will cost less in the long term – the temperature is stable so the cost of ventilation or heating is reduced,” says Gunnar Jenssen, head of underground psychology at Scandinavian research organisation SINTEF.
The problem, says Asmo Jaaksi, chief architect of Helsinki’s underground Amos Rex Museum, is not the construction itself – “it’s making people comfortable to go underground that we found complicated,” he explains. “The museum was running out of space and needed a new, larger building. The only option, a modernist cinema called the Rex, would not have been big enough so we dug out the square outside the Rex and sunk the museum beneath. We found people needed to feel connected to the surface somehow.”
Jaaksi’s solution was elaborate, artistic skylights that let natural light and curious gazes in. This isn’t always possible. In London, a former air raid shelter beneath Clapham Common has become the world’s first underground farm – Growing Underground – hydroponically growing salad rocket, garlic chives, wasabi mustard, fennel, pink stem radish and purple radish which it sells to M&S and Ocado. The growing light used, however, has a deep pink tinge which becomes uncomfortable after long exposure.
In New York, meanwhile, the Lowline Lab ran an experimental two-year project from October 2015 to March 2017, growing plants underground using solar panels on city rooftops to deliver natural light to the tunnels below. Having successfully grown over 100 different species of plant, the Lab is aiming to open its first underground green space in 2021.
11th Nov 2019
It is real and it is already happening. Human-caused climate change has already been proven to increase the risk of floods and extreme rainfall, heatwaves and wildfires with implications for humans, animals and the environment.
And things aren't looking good for the future either. With the concentration of carbon dioxide (CO2) in the atmosphere projected to maintain an average 411 parts per million (ppm) throughout 2019, there is a long way to go before the ambitious goals of the Paris Agreement are met. To put this into context: atmospheric CO2 hovered around 280 ppm before the start of the Industrial Revolution in 1750 – the 46 per cent increase since then is the main cause of global warming. Reliable temperature records began in 1850 and our world is now about one degree Celsius hotter than in the “pre-industrial” period.
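The 46 per cent figure quoted above follows directly from the two concentrations (the exact value depends on which year's average is used as the endpoint):

```python
# CO2 concentrations in parts per million (ppm), from the figures above.
pre_industrial = 280  # circa 1750, before the Industrial Revolution
recent = 411          # projected 2019 average

# Relative increase since pre-industrial times.
increase_pct = (recent - pre_industrial) / pre_industrial * 100
print(round(increase_pct, 1))  # 46.8, close to the 46 per cent quoted
```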
The Paris Agreement focuses on keeping the global temperature rise in this century to well below two degrees Celsius above pre-industrial levels – ideally to 1.5 degrees Celsius – to avoid “severe, widespread and irreversible” climate change effects. But, if current trends continue, the world is likely to pass the 1.5 degrees Celsius mark between 2030 and 2052 unless it finds a way to reach net zero emissions.
Here's everything you need to know about where we are with the climate crisis.
San Francisco, British Columbia and Delhi all reported all-time record June temperatures this year, suggesting heatwaves are beginning anew in the Northern Hemisphere this summer. In 2018, the UK experienced the hottest summer since 2006 and a scientific study into last year’s data showed that such heatwaves are now 30 times more likely due to climate change.
And all of this is set to become much more common. In any given year, there is now a 12 per cent chance of average temperatures being as high as the UK experienced last year; this compares with the less than half a per cent chance that would be expected in a climate without human-caused climate change.
But the country is not only experiencing soaring temperatures in summer. Temperatures of 21.2 degrees Celsius were recorded in London’s Kew Gardens on February 26, 2019. It was the warmest winter day the UK has ever experienced. Parts of the country were hotter than Malibu, Barcelona and Crete. Milder winters can have detrimental effects on hibernating mammals, migratory birds and flowering plants.
Sea levels are rising at the fastest rate in 3,000 years, an average three millimetres per year. The two major causes of sea level rise are thermal expansion – the ocean is warming and warmer water expands – and melting of glaciers and ice sheets that increases the flow of water. Antarctica and Greenland hold enough frozen water to raise global sea levels by about 65 metres if they were to melt completely. Even if this scenario is unlikely, these ice masses are already melting faster. And island nations and coastal regions are feeling the impact.
Earlier this year, Indonesia announced its plans to move the capital city away from Jakarta. Home to over ten million people, some parts of Jakarta are sinking as much as 25cm per year. Jakarta’s precarious position is thanks to a combination of two factors – rising global sea levels and land subsidence as underground water supplies have been drained away to meet water needs.
The average size of vertebrate (mammal, fish, bird and reptile) populations declined by 60 per cent between 1970 and 2014, according to the biennial Living Planet Report published by the Zoological Society of London and the WWF. That doesn't mean that total animal populations have declined by 60 per cent, however, as the report averages the relative declines of different animal populations. Imagine a population of ten rhinos where nine of them died: a 90 per cent population drop. Add that to a population of 1,000 sparrows where 100 of them died – a ten per cent decrease. The average population decrease across these two groups would be 50 per cent, even though the loss of individuals would be just 10.8 per cent.
Whatever way you stack the numbers, climate change is definitely a factor here. An international panel of scientists, backed by the UN, argues that climate change is playing an increasing role in driving species to extinction. It is thought to be the third biggest driver of biodiversity loss, after changes in land and sea use and overexploitation of resources. Even under a two degrees Celsius warming scenario, five per cent of animal and plant species will be at risk of extinction. Coral reefs are particularly vulnerable to extreme warming events: their cover could be reduced to just one per cent of current levels at two degrees Celsius of warming.
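The rhino-and-sparrow arithmetic can be checked directly; this toy calculation shows how averaging per-population declines differs from counting lost individuals:

```python
# Two hypothetical populations and their losses, as in the example above.
rhinos_before, rhinos_lost = 10, 9          # 90 per cent decline
sparrows_before, sparrows_lost = 1000, 100  # 10 per cent decline

decline_rhinos = rhinos_lost / rhinos_before * 100
decline_sparrows = sparrows_lost / sparrows_before * 100

# The index averages the per-population declines...
average_decline = (decline_rhinos + decline_sparrows) / 2
# ...whereas the share of individuals lost is far smaller.
individuals_lost = (rhinos_lost + sparrows_lost) / (rhinos_before + sparrows_before) * 100

print(average_decline)             # 50.0
print(round(individuals_lost, 1))  # 10.8
```

The gap arises because a small population (the rhinos) counts just as much in the average as a large one (the sparrows).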
In May, sensors at the Mauna Loa observatory in Hawaii – which has tracked Earth’s atmospheric concentration of CO2 since the late 1950s – detected a CO2 concentration of 415.26 ppm. The last time Earth's atmosphere contained this much CO2 was more than three million years ago, when sea levels were several metres higher and trees grew at the South Pole. Scientists have warned that carbon dioxide levels higher than 450ppm are likely to lock in catastrophic and irreversible changes in the climate. Around half of the CO2 emitted since 1750 has been in the last 40 years.
Earth Overshoot Day is a symbolic date on which humanity's consumption for the year outstrips Earth's capacity to regenerate those resources that year. The calculated date is getting earlier each year. It is July 29 in 2019; in 1999 it was September 29. The cost of this overspending includes deforestation, soil erosion, overfishing, and CO2 build-up in the atmosphere, which leads to global warming, more severe droughts, wildfires and other extreme weather events.
Dengue is the world’s fastest-growing mosquito-borne virus, currently killing some 10,000 people and affecting around 100 million per year. As global temperatures are rising, Aedes aegypti mosquitos that carry the disease could thrive in places that were previously unsuitable for them and benefit from shorter incubation periods. A recent study published in the scientific journal Nature warned that, in a warming world, dengue could spread to the US, higher altitudes in central Mexico, inland Australia and to large coastal cities in eastern China and Japan.
The number of floods and heavy rains has quadrupled since 1980 and doubled since 2004. Extreme temperatures, droughts and wildfires have also more than doubled in the last 40 years. While no extreme weather event is ever down to a single cause, climate scientists are increasingly exploring the human fingerprints on floods, heatwaves, droughts and storms. Carbon Brief, a UK-based website covering climate science, gathered data from 230 studies into “extreme event attribution” and found that 68 per cent of all extreme weather events studied in the last 20 years were made more likely or more severe by human-caused climate change. Heatwaves account for 43 per cent of such events, droughts make up 17 per cent, and heavy rainfall or floods account for 16 per cent.
Extreme weather is driving up demand for energy. Carbon emissions from global energy use jumped two per cent in 2018, according to BP’s annual world energy study. This was the fastest growth in seven years and is roughly the carbon equivalent to increasing the number of passenger cars worldwide by a third. The unusual number of hot and cold days last year resulted in increased use of cooling and heating systems powered by natural gas and coal. The energy sector accounts for two-thirds of all carbon emissions.
The world’s tropical forests are shrinking at a staggering rate, the equivalent of 30 football pitches per minute. Whilst some of this loss may be attributed to natural causes such as wildfires, forest areas are primarily cleared to make way for cattle or agricultural production such as palm oil and soybeans. Deforestation contributes to global carbon emissions because trees naturally capture and lock away carbon as they grow.
When forest areas are burnt, carbon that took decades to store is immediately released back into the atmosphere. Tropical deforestation is now responsible for 11 per cent of the world’s CO2 emissions – if it were considered a country, tropical deforestation would be the third-largest emitter after China and the US.
There are about 210,000 electric vehicles in the UK. Although there is a steady growth in demand, only two per cent of households own a hybrid and just one per cent have an all-electric car. The UK has set a net zero target for transport emissions meaning all cars and vans on its roads will have to be all-electric by 2050, but if the country is to stand any chance of achieving these ambitious plans, tens of millions of petrol and diesel cars will have to be replaced.
In a recent letter to the Committee on Climate Change, experts warned that, based on the latest battery technology, the UK will need to import almost as much cobalt as is consumed annually by European industry, three quarters of the world’s lithium production, nearly the entire global production of neodymium, and at least half of the world’s copper production. There are currently 31.5m cars on UK roads, covering more than 400 billion kilometers per year.
Acknowledgement: www.wired.co.uk
What it is: Japan is now working to revamp the Fukushima nuclear meltdown zone to once again produce electricity, but this time using solar and wind power. Thanks to a loan from the state-run Development Bank of Japan and the Mizuho Bank, the region will soon produce about 600 megawatts of electricity, courtesy of 11 new solar plants and 10 new wind farms. With expected completion in March of 2024 at a cost of $2.7 billion, the power plants are predicted to generate enough power for about 114,000 average American homes.
Why it’s important: Nearly 43,000 Japanese citizens remain displaced by the Fukushima disaster, while about 143 square miles of the prefecture stand in a permanent evacuation zone. Yet Japan now seeks to capitalize on this seeming “dead zone,” leveraging the expanse of uninhabitable land to power residential regions. Contributing to the prefecture’s goal of achieving 100 percent renewable energy-derived power by 2040, this power infrastructure will help pave the way for similar initiatives worldwide.
What it is: In partnership with the Kansas Department of Transportation, drone startup Iris Automation has successfully completed the first FAA-approved BVLOS (“beyond the visual line of sight”) drone flight. Until now, the FAA and most other jurisdictions have required human observers and on-ground radar systems for testing of new services, costing companies up to $50 million and thereby hindering development of viable drone services. Yet with newfound FAA approval, Iris Automation relied solely on onboard detect-and-avoid systems, which demonstrated a 95 per cent success rate in avoiding head-on collisions.
Why it’s important: We’re now seeing a massive surge in the rate of development and approval of autonomous drone use for delivery of critical supplies and commerce. Meanwhile, numerous regulatory agencies – including state-level government departments in even technologically lagging regions – continue to define and refine operational guidelines. As the immediacy of retail interactions, aid delivery, and small-scale cargo transit continues to skyrocket, expect a proliferation of drone manufacturers, complex sensors, and AI navigation software systems.
What it is: A new study published in the Journal of Chemical Information and Modeling suggests that more than 1 million chemical look-alikes might encode biological information, as DNA does. So far, DNA, RNA, and a few man-made molecules are the only known nucleic acids capable of linking up, storing and relaying data, depending on their sequence. By designing a computer program that can generate chemical formulas, researchers at Emory University tested countless generated molecules to determine whether they resembled nucleotides. To everyone’s surprise, the results identified over 1,160,000 molecules that could couple up in distinct pairings and assemble in a line, akin to DNA and RNA.
Why it’s important: Prompting us to fundamentally rethink optimal means of genetic data conveyance, this discovery has vast new implications. As a number of current drugs resembling nucleotides are effective in combating viruses and some malignant cancer cells, the team’s generated list could pave the way for novel pharmaceutical products. Within evolutionary biology, the finding that DNA and RNA have plenty of company may yield new truths about how life first evolved on Earth.
IoT – a concept that did not even exist a decade ago – has today not only gone mainstream but has also marked its presence across industries around the globe. With its market size poised to reach $1.6 trillion by 2025, entrepreneurs and businesses from all corners are finding opportunities to enter the segment.
And since we are well past the stage of addressing what IoT is, how it works, and what its advantages are, let us jump straight to what its future holds.
But before that, here’s a quick synopsis of the IoT market to set the tone of the article.
Even people who dismissed smart homes as gadgets for pretentious youngsters are now finding it difficult to ignore the capabilities the technology brings. While demand for connected home devices started with steady growth, it will see a sharp rise in the years to come.
This demand, in turn, will increase the need for the manufacturing of connected electronic devices.
For a long time, IoT devices have relied on the cloud for storing their data. But the IoT application development industry has now started questioning the implications of pushing all storage, computation, and analysis of data to the cloud.
Instead of sending data from IoT devices straight to the cloud, they are demanding that the data first be transferred to a local device closer to the edge of the network. This local device sorts, filters, and processes the data, sending only part – or, when necessary, all – of it to the cloud, thus reducing network traffic.
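The edge pattern described above can be sketched in a few lines. This is purely illustrative – it uses no real IoT SDK, and the function name, field names, and thresholds are all hypothetical – but it shows the core idea: raw readings are reduced to a compact summary at the edge, and only anomalous values travel upstream in full.

```python
# Illustrative sketch of edge-side filtering (hypothetical names, no real IoT SDK):
# the edge node summarizes raw readings locally and forwards only the summary
# plus any out-of-range values to the cloud.
from statistics import mean

def edge_filter(readings, low, high):
    """Forward only out-of-range readings plus a summary of the rest."""
    anomalies = [r for r in readings if not low <= r <= high]
    return {"count": len(readings),
            "mean": round(mean(readings), 2),
            "anomalies": anomalies}   # only these raw values go upstream

# A minute of temperature readings from a hypothetical sensor:
raw = [21.0, 21.2, 20.9, 85.0, 21.1]          # 85.0 is a spurious spike
payload = edge_filter(raw, low=10.0, high=40.0)
print(payload)  # {'count': 5, 'mean': 33.84, 'anomalies': [85.0]}
```

Instead of five raw values, the cloud receives two aggregates and one anomaly – the traffic reduction compounds quickly across thousands of devices.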
Edge computing offers a series of benefits to an IoT mobile app development company and developers, which makes it one of the key emerging trends in IoT –
With adoption on the rise, more and more devices are getting connected to the Internet of Things. And as the network expands, the volume of data expands with it, putting more information at risk. This greater use of IoT tools and technologies must be accompanied by a greater boost to IoT security awareness and training. The year to come will see machine-to-machine authentication expanding, biometric logins becoming the norm, and technologies like machine learning, big data, and artificial intelligence being used to eliminate data breaches.
There was once a time when the goal of IoT was to collect data coming in from multiple sources. But today, the intent has become to not just collect data but also extract useful information from it.
Incorporation of next gen technologies like Big Data, Artificial Intelligence, and Machine Learning will define the trends and implications of IoT in the coming time.
Using data analytics tools in connected devices, businesses will be able to make decisions around both predictive and preventive measures.
Although an extension of the security section, the integration of IoT and Blockchain deserves special attention.
There are some persistent issues facing IoT that have been adversely affecting overall IoT growth trends and mass adoption – scalability, high cost, and security being a few of them. A majority of these issues can be traced back to the centralized network.
Because the network is centralized, there is minimal to zero guarantee of security, especially since the data is held by a single party.
Blockchain, being decentralized, eliminates both the lack of security and the concentration of control in one party. The plethora of benefits that the combination of IoT and Blockchain offers makes it one of the important IoT technology trends.
After lying in the background for some years, smart cities will become a reality in the time to come. 2020 and the years that follow will see the inception of several Internet of Things applications directed at improving the environmental, social, and financial elements of urban living. IoT tech spending on smart cities, anticipated to reach $80 billion by 2050, will become a prime ingredient in the goal of improving quality of life and sustainability.
It is impossible to talk about the technology without mentioning the benefits of cloud computing in IoT. By making data accessible in real time, SaaS will find itself being explored by a number of businesses as part of IoT trends 2020.
The time to come will see more cloud vendors entering the picture to support the dependencies that mass IoT adoption will bring.
The absence of a unified IoT framework has been a major challenge for the IoT industry for a long time. The fact that few companies work around a shared central platform affects the adoption process to a huge extent.
Solving this issue through Blockchain will be one of the major IoT trends of 2020. A Blockchain-driven ecosystem will bring all the data into one place with a decentralized operating model in which the information is not controlled by any one entity.
The time to come will see IoT being used not just for personal or consumer purposes but also for industrial use. A validation of this can be seen in the numbers: IoT-connected devices were set to grow to over 3.7 billion by 2019 and to over 50 billion by the end of next year.
While the consumer based adoption of IoT is well talked about, we will look into the industrial internet of things in much detail in the next section.
As of now, the future of IoT lies in worldwide adoption.
One of the major IoT mobile app development trends will be efficient location tracking and wireless sensing.
With 5G prepared to come to the forefront, apps will gain new capabilities on these fronts. According to Gartner, wireless sensing will be used for the creation of drones and virtual assistants, in addition to being helpful in object recognition and medical diagnostics.
There are several use cases of how IoT is shaping the future of better customer service. Driven largely by the convergence of IoT and Big Data, customer experience is poised to become a lot more personalized in the coming time. The combination of the two is what will help achieve a true omni-channel customer experience – a need of every modern-day business, irrespective of industry.
The next-gen manufacturing tools will make use of built-in sensors and advanced programming to perform predictive analytics and forecast potential issues well before they actually happen.
This will not just lower downtime; data-based predictive analytics will also eliminate the guessing game from preventative maintenance strategies, enabling engineers to schedule and then initiate maintenance when the machines are dormant.
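At its simplest, a predictive-maintenance rule compares a machine's recent sensor trend against its learned baseline. The following is a toy sketch under stated assumptions – the readings, units, and three-sigma threshold are all hypothetical, and real systems use far richer models – but it captures the basic decision.

```python
# Toy predictive-maintenance rule (illustrative only): flag a machine for
# maintenance when its recent vibration average drifts well above the
# baseline learned from healthy operation.
from statistics import mean, stdev

def needs_maintenance(history, recent, sigma=3.0):
    """Flag when the recent average exceeds baseline mean + sigma * stdev."""
    baseline, spread = mean(history), stdev(history)
    return mean(recent) > baseline + sigma * spread

history = [0.50, 0.52, 0.48, 0.51, 0.49, 0.50]   # healthy vibration (mm/s)
recent  = [0.80, 0.85, 0.90]                     # bearing starting to wear
print(needs_maintenance(history, recent))        # True: schedule downtime now
```

Because the flag is raised while the machine is still running, the maintenance window can be scheduled for an idle period rather than forced by a breakdown.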
The 12 IoT technology trends for 2020 that you just read are only the tip of the iceberg.
6th Nov 2019
3D-printed tissues and organs could revolutionize transplants, drug screens, and lab models—but replicating complicated body parts such as gastric tracts, windpipes, and blood vessels is a major challenge. That’s because these vascularized tissues are hard to build up in traditional solid layer-by-layer 3D printing without constructing supporting scaffolding that can later prove impossible to remove.
One potential solution is replacing these support structures with liquid—a specially designed fluid matrix into which liquid designs could be injected before the “ink” is set and the matrix is drained away. But past attempts to make such aqueous structures have literally collapsed, as their surfaces shrink and their structures crumple into useless blobs.
So, researchers from China turned to water-loving, or hydrophilic, liquid polymers that create a stable membrane where they meet, thanks to the attraction of their hydrogen bonds. The researchers say various polymer combinations could work; they used a polyethylene oxide matrix and an ink made of a long carbohydrate molecule called dextran.
They pumped their ink into the matrix with an injection nozzle that can move through the liquid and even suck up and rewrite lines that have already been drawn. The resulting liquid structures can hold their shape for as long as 10 days before they begin to merge, the team reported last month in Advanced Materials.
Using their new method, the researchers printed an assortment of complex shapes—including tornadoesque whirls, single and double helices (above), branched treelike shapes, and even one that resembles a goldfish. Once printing is finished, the shapes are set by adding polyvinyl alcohol to the inky portion of the structure. That means, the scientists say, that complex 3D-printed tissues made by including living cells in the ink could soon be within our grasp.
Now Boston Dynamics’ nimble four-legged robot, Spot, is available for companies to lease to carry out various real-world jobs, a sign of just how common interactions between humans and machines have become in recent years.
And while Spot is versatile and robust, it’s what society thinks of as a traditional robot, a mix of metal and hard plastic.
Many researchers are convinced that soft robots capable of safe physical interaction with people – for example, providing in-home assistance by gripping and moving objects – will join hard robots to populate the future.
Soft robotics and wearable computers, both technologies that are safe for human interaction, will demand new types of materials that are soft and stretchable and perform a wide variety of functions.
Making materials intelligent
This idea that the material is the machine can be captured in the concept of embodied intelligence. This term is usually used to describe a system of materials that are interconnected, like tendons in the knee.
When running, tendons can stretch and relax to adapt each time the foot strikes the ground, without the need for any neural control.
It’s also possible to think of embodied intelligence in a single material – one that can sense, process and respond to its environment without embedded electronic devices like sensors and processing units.
A simple example is rubber. At the molecular level, rubber contains strings of molecules that are coiled up and linked together.
Stretching or compressing rubber moves and uncoils the strings, but their links force the rubber to bounce back to its original position without permanently deforming. The ability for rubber to “know” its original shape is contained within the material structure.
Since engineered materials of the future that are suitable for human-machine interaction will require multifunctionality, researchers have tried to build new levels of embodied intelligence – beyond just stretching – into materials like rubber. Recently, my coworkers created self-healing circuits embedded in rubber.
They started by dispersing micro-scale liquid metal droplets wrapped in an electrically insulating “skin” throughout silicone rubber. In its original state, the skin’s thin metal oxide layer prevents the metal droplets from conducting electricity.
However, if the metal-embedded rubber is subjected to enough force, the droplets will rupture and coalesce to form electrically conductive pathways.
Any electrical lines printed in that rubber become self-healing. In a separate study, they showed that the mechanism for self-healing could also be used to detect damage.
New electrical lines form in the areas that are damaged. If an electrical signal gets through, that indicates the damage.
The combination of liquid metal and rubber gave the material a new route to sense and process its environment – that is, a new form of embodied intelligence.
The rearrangement of the liquid metal allows the material to “know” when damage has occurred because of an electrical response.
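The damage-sensing mechanism can be pictured as a continuity check: droplets along a sensing line are insulating until ruptured, and a signal only passes once damage has made the whole path conductive. The toy model below is purely conceptual – the grid, coordinates, and function names are invented for illustration, not taken from the actual material.

```python
# Conceptual toy model (not the real physics): liquid-metal droplets are
# insulating until ruptured by damage; if every cell along a sensing line
# between two electrodes has become conductive, a signal passes, which is
# itself the damage report.
def damage_detected(ruptured_cells, path):
    """Signal passes only if every cell along the sensing line is conductive."""
    return all(cell in ruptured_cells for cell in path)

sense_line = [(0, 0), (0, 1), (0, 2)]   # cells between the two electrodes

print(damage_detected(set(), sense_line))                      # False: intact
print(damage_detected({(0, 0), (0, 1), (0, 2)}, sense_line))   # True: puncture
```

The inversion is the interesting part: where a conventional circuit reports damage by going open, this material reports damage by closing a circuit that did not exist before.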
Shape memory is another example of embodied intelligence in materials. It means materials can reversibly change to a prescribed form.
Shape memory materials are good candidates for linear motion in soft robotics, able to move back and forth like your bicep muscle. But they also offer unique and complex shape-changing capabilities.
For example, two groups of materials scientists recently demonstrated how a class of materials could reversibly transform from a flat rubber-like sheet into a 3-D topographical map of a face.
It’s a feat that would be difficult with traditional motors and gears, but it’s simple for this class of materials due to the material’s embodied intelligence.
The researchers used a class of materials known as liquid crystal elastomers, which are sometimes described as artificial muscles because they can extend and contract with the application of a stimulus like heat, light, or electricity.
By drawing inspiration from the liquid metal composite and the shape-morphing material, my colleagues and I recently created a soft composite with unprecedented multifunctionality.
It is soft and stretchable, and it can conduct heat and electricity. It can actively change its shape, unlike regular rubber. Since our composite easily conducts electricity, the shape-morphing can be activated electrically.
Since it is soft and deformable, it is also resilient to significant damage. Because it can conduct electricity, the composite can interface with traditional electronics and dynamically respond to touch.
Furthermore, our composite can heal itself and detect damage in a whole new way. Damage creates new electrically conductive lines that activate shape-morphing in the material. The composite responds by spontaneously contracting when punctured.
In the movie “Terminator 2: Judgment Day,” the shape-shifting android T-1000 can liquify; can change shape, color, and texture; is immune to mechanical damage; and displays superhuman strength.
Such a complex robot requires complex multifunctional materials. Now, materials that can sense, process and respond to their environment like these shape-morphing composites are starting to become a reality.
But unlike T-1000 these new materials aren’t a force for evil – they’re paving the way for soft assistive devices like prosthetics, companion robots, remote exploration technologies, antennas that can change shape and plenty more applications that engineers haven’t even dreamed up yet.
Michael Ford, Postdoctoral Research Associate in Materials Engineering, Carnegie Mellon University. This article is republished from The Conversation under a Creative Commons license.
Last February, OpenAI, an artificial intelligence research group based in San Francisco, announced that it had been training an AI language model called GPT-2, and that it now “generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modelling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarisation – all without task-specific training”.
If true, this would be a big deal. But, said OpenAI, “due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”
Given that OpenAI describes itself as a research institute dedicated to “discovering and enacting the path to safe artificial general intelligence”, this cautious approach to releasing a potentially powerful and disruptive tool into the wild seemed appropriate. But it appears to have enraged many researchers in the AI field for whom “release early and release often” is a kind of mantra. After all, without full disclosure – of program code, training dataset, neural network weights, etc – how could independent researchers decide whether the claims made by OpenAI about its system were valid? The replicability of experiments is a cornerstone of scientific method, so the fact that some academic fields may be experiencing a “replication crisis” (a large number of studies that prove difficult or impossible to reproduce) is worrying. We don’t want the same to happen to AI.
If the row over GPT-2 has had one useful outcome, it is a growing realisation that the AI research community needs to come up with an agreed set of norms about what constitutes responsible publication (and therefore release). At the moment, as Prof Rebecca Crootof points out in an illuminating analysis on the Lawfare blog, there is no agreement about AI researchers’ publication obligations. And of all the proliferating “ethical” AI guidelines, only a few entities explicitly acknowledge that there may be times when limited release is appropriate. At the moment, the law has little to say about any of this – so we’re currently at the same stage as we were when governments first started thinking about regulating medicinal drugs.
In the 35-year span since the launch of the first Terminator movie, a variety of technological advancements in AI and robotics have brought elements of "Terminator" closer to reality. Artificial intelligence experts are confident, however, that the kind of independent AI and humanoid robots of the movie franchise are still far off.
But they also offer a warning: the developments that people have made in AI and military technology could create their own kind of "Judgement Day."
"AI is a powerful technology, but it’s a tool, not unlike a pencil," Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, told NBC News. "How it’s used is in the hands of people."
AI may not yet boast self-awareness, but it already rivals and in some cases surpasses human intelligence across a range of applications, including reading CT scans and spotting shoplifters as well as helping self-driving cars navigate crowded cities. Developers have not made artificially intelligent machines that look like Arnold Schwarzenegger, but they can at least make one sound exactly like podcast host Joe Rogan, to the point where it can fool human listeners.
Panicking at this stage of the technology's development, Etzioni said, "is like being worried about overpopulation on Mars before we even have gotten a person on Mars."
When might we feel the need to push the panic button? Estimates vary wildly. Some experts interviewed by NBC News predict the singularity — roughly defined as the time when an artificial intelligence will surpass human intelligence and be able to evolve autonomously — will arrive as soon as 15 years from now. Others say it will be closer to a century.
One point that everyone agrees on, however, is a computer will eventually surpass its creators. And when it does, it's not clear it will be possible to program enough safeguards for humans to remain the apex programmers.
VR systems are getting more advanced, but they're still primarily available on niche hardware and software. That could be about to change, with the latest beta version of Google's Chrome browser supporting web-based VR.
The beta of Chrome 79 reveals the details about the support for web-based virtual reality (VR) experiences. Developers can create websites with content including games, 360-degree videos and immersive art using the WebXR Device API, with controllers supported by the GamePad API. Sites can be displayed on a smartphone or on a head-mounted display such as an Oculus Quest.
Support for web-based VR content will be coming to other Chromium-powered browsers in addition to Chrome soon, including Firefox Reality, Oculus Browser, Edge and Magic Leap's Helio. In the future, Chrome and other browsers will support augmented reality features as well.
You can download the beta of Chrome 79 now, or wait for the features to make their way to the main Chrome browser on December 10th.
The facility lies midway between Munich’s city center and its international airport, roughly 23 miles to the north. From the outside, it still looks like the state-run farm it once was, but peer through the windows of the old farmhouse and you’ll see rooms stuffed with cutting-edge laboratory equipment.
When Kessler unlocks one pen to show off its resident, a young sow wanders out and starts exploring. Like other pigs here, the sow is left nameless, so her caregivers won’t get too attached. She has to be coaxed back behind a metal gate. To the untrained eye, she acts and looks like pretty much any other pig, but smaller.
It’s what’s inside this animal that matters. Her body has been made a little less pig-like, with four genetic modifications that make her organs more likely to be accepted when transplanted into a human. If all goes according to plan, the heart busily pumping inside a pig like this might one day beat instead inside a person.
Different types of tissues from genetically engineered pigs are already being tested in humans. In China, researchers have transplanted insulin-producing pancreatic islet cells from gene-edited pigs into people with diabetes. A team in South Korea says it’s ready to try transplanting pig corneas into people, once it gets government approval. And at Massachusetts General Hospital, researchers announced in October that they had used gene-edited pig skin as a temporary wound covering for a person with severe burns. The skin patch, they say, worked as effectively as human skin, which is much harder to obtain.
But when it comes to life-or-death organs, like hearts and livers, transplant surgeons still must rely on human parts. One day, the dream goes, genetically modified pigs like this sow will be sliced open, their hearts, kidneys, lungs and livers sped to transplant centers to save desperately sick patients from death.
We’re still not at the point where autonomous vehicle systems can best human drivers in all scenarios, but the hope is that eventually, technology being incorporated into self-driving cars will be capable of things humans can’t even fathom — like seeing around corners. There’s been a lot of work and research put into this concept over the years, but MIT’s newest system uses relatively affordable and readily available technology to pull off this seemingly magic trick.
MIT researchers (in a research project backed by Toyota Research Institute) created a system that uses minute changes in shadows to predict whether or not a vehicle can expect a moving object to come around a corner, which could be an effective system for use not only in self-driving cars, but also in robots that navigate shared spaces with humans — like autonomous hospital attendants, for instance.
This system employs standard optical cameras, and monitors changes in the strength and intensity of light using a series of computer vision techniques to arrive at a final determination of whether shadows are being projected by moving or stationary objects, and what the path of said object might be.
In testing so far, this method has actually been able to best similar systems already in use that employ lidar imaging in place of photographic cameras and that don’t work around corners. In fact, it beats the lidar method by over half a second, which is a long time in the world of self-driving vehicles, and could mean the difference between avoiding an accident and, well, not.
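The intuition behind shadow-based anticipation can be reduced to frame differencing: if the floor near a corner darkens steadily from frame to frame, something is probably approaching. The sketch below is not MIT's actual pipeline (their system uses more sophisticated computer-vision techniques); the frames, threshold, and decision rule here are invented for illustration.

```python
# Minimal sketch of the underlying idea (not MIT's actual system): compare
# successive grayscale frames of the floor near a corner; a steadily growing
# intensity change suggests a moving shadow, i.e. something approaching.
def mean_abs_diff(frame_a, frame_b):
    """Average per-pixel absolute difference between two grayscale frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def shadow_approaching(frames, threshold=2.0):
    """Flag if frame-to-frame change keeps increasing and exceeds a threshold."""
    diffs = [mean_abs_diff(f1, f2) for f1, f2 in zip(frames, frames[1:])]
    return diffs == sorted(diffs) and diffs[-1] > threshold

# Four tiny 4-pixel "frames" where a shadow darkens the floor progressively:
frames = [[200] * 4, [199] * 4, [196] * 4, [190] * 4]
print(shadow_approaching(frames))   # True: slow down before the corner
```

A static scene produces flat differences and no alarm; the accelerating darkening pattern is what distinguishes a moving shadow from lighting noise.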
An otherwise peaceful suburban neighborhood in Washington Township, Pa., began experiencing a series of explosions this past spring and summer. Homemade bombs were blowing up in front yards. Nails were raining down from the sky. Windows were left riddled with marks, as if they had been shot at.
For a while, the police were mystified. They could find no clues to the identity of the bomber, and they were confused about how the perpetrator could leave no footprints, tire tracks or DNA behind.
Only after a resident’s security camera caught a glimpse of what was going on did they crack the case. The perpetrator, it turns out, was a drone, one that the authorities say was controlled by a man who is now behind bars, accused of serious felonies.
Drones pose novel and difficult problems for law enforcement. They are widely available, lightly regulated and can be flown remotely by an operator far away from the crime scene. They have already been put to a host of nefarious uses, from smuggling contraband into prisons to swarming F.B.I. agents who were preparing for a raid. And local and state authorities are restricted by federal law from intercepting drones in flight, potentially even when a crime is in progress, though experts say that has yet to be tested in court.
There have long been concerns about the use of drones for smuggling. The Border Patrol caught two people flying 28 pounds of heroin over the border near Calexico, Calif., in 2015. In July, a man pleaded guilty to attempting to use an unregistered drone to smuggle a bag of marijuana into Autry State Prison in Pelham, Ga.
Drones are not easy to detect in flight, as the Secret Service found when one flew unnoticed over the White House grounds and crash-landed on the lawn in 2015.
Audio sensors can listen for the distinctive sound of a drone, but that method does not work well in urban areas, and a drone’s sound signature can be altered by changing its propellers. Cameras have limited reach and may not be able to tell a drone from a bird. Commercially manufactured drones are typically made largely of plastic and run on battery power, so they do not give off much heat or show up strongly on radar. Picking up a drone’s radio signal is considered the most reliable way to detect one — but that does not mean the drone is easy to catch.
What about using jamming systems or other technology to interfere with drones in flight or keep them from flying where they do not belong? The only agencies allowed to do that are the federal departments of Defense, Justice, Energy and Homeland Security. For everyone else, it is illegal in all but the most exceptional circumstances — and so is taking down a drone in flight.
“The consensus is, no one has cracked the code on countering drones,” Mr. Holland Michel said. “It’s an unresolved challenge.”
Nowadays, sequencing a human genome takes around an hour at an approximate cost of US$100; the first genome sequencing, by contrast, cost approximately US$2.7 billion and took around 13 years of research. The benefit we will achieve with the progress of genetic sequencing is that food and workout requirements can be fully customized and personalized. We can predict the diseases that may develop and take preventive measures, and understand our system-friendly bacteria and how to protect them while having a bactericidal effect on non-friendly bacteria.
The human genome contains around 3.2 billion DNA base pairs, with chromosomes contributed in equal number by both parents. This genome is carried in each of the roughly 30 to 40 trillion cells that form our body, and its function determines our health. So the DNA, or genome, is the program code that makes one person different from another. One's personality, behavior, disease immunity, skin color, hair and eye color, and entire structure and make-up depend on the genome. Companies like Illumina are delivering genome sequencing at a very competitive cost of approximately US$100 in around an hour.
N-of-1 Medicine
As we discuss below, there are two genetic approaches – gene editing with CRISPR, and stem cell therapy – but what is more significant to mention here is the synergistic effect, or convergence, of these exponential technologies, which is what will bring real value to the personalized healthcare approach. That is what is called N-of-1 medicine: diagnosis and treatment regimens designed exclusively for you – your genome, transcriptome, proteome, microbiome and so on.
Just as AI and machine learning are changing how other industries optimize asset management and maintenance, personalized genome understanding is helping to optimize an individual's healthcare. Some time back I read an article, "Death of Death," in which the authors discussed how a predictive understanding of the genome could make most diseases fully addressable even before they strike an individual. What is the perfect food, the perfect medicine, the perfect exercise for you and only you? Which diseases are you susceptible to, and how can they be prevented? All of this can be predicted with genome understanding. Huge interest has been observed in the field of human genomics, with more and more investment going toward personalized medicine.
There are two primary approaches to genetic therapy: gene editing, which repairs the DNA inside a cell, and stem cell therapy, which replaces the cell entirely. To treat genetic diseases, CRISPR-Cas9, a gene engineering tool, allows scientists to locate a defective DNA sequence, edit the genetic code, and rewrite that DNA. CRISPR has become the mainstay of gene editing over the last five or six years, as it is cheaper, faster, and easier to use than earlier tools. More recently, Harvard scientists introduced a next-generation "CRISPR 2.0" technique, base editing, which can target and change a single letter in a DNA string. "Of more than 50,000 genetic changes currently known to be associated with disease in humans," David Liu, the Harvard chemical biologist who led the work, told the LA Times, "32,000 of those are caused by the simple swap of one base pair for another." That justifies single-letter editing within a sequence of 3.2 billion letters.
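As a loose software analogy (toy code only, not real bioinformatics tooling), a single base-pair swap is just a one-character substitution in a sequence string, with the complementary strand updated to match:

```python
# Watson-Crick base pairing: each base maps to its complement
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def base_edit(seq: str, pos: int, new_base: str) -> tuple:
    """Swap one base at `pos` and return the edited strand and its complement."""
    assert new_base in PAIR, "must be one of A, T, C, G"
    edited = seq[:pos] + new_base + seq[pos + 1:]
    complement = "".join(PAIR[b] for b in edited)
    return edited, complement

# e.g. correcting a hypothetical single-letter variant at index 3
print(base_edit("GATTACA", 3, "C"))  # -> ('GATCACA', 'CTAGTGT')
```

The point of the analogy: the change itself is trivial to describe, and the hard part — which base editing addresses — is making exactly that one change, at exactly that one position, out of 3.2 billion.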
Designer babies become possible through the genetic editing of embryo cells, an application known as human germline engineering. "With the advent of efficient, easy-to-use genome editing by CRISPR–Cas9, editing human embryos is now possible, providing tremendous opportunities to study gene function and cell fate in early human development. The technique can also be used to modify the human germline. Unresolved questions about pre-implantation human development could be addressed by basic research using CRISPR–Cas9."
https://www.nature.com/articles/s41556-019-0424-0
The abstract, as published in Nature, discusses many ethical angles of the research in this area. Very recently, scientists have genetically engineered cocaine resistance into mice, discovered the gene responsible for muscular dystrophy in dogs, and begun developing personalized cancer therapies for humans. Through gene editing, a new breed of mosquito has been created that cannot reproduce.
A robot has debated the dangers of AI, narrowly convincing audience members that the technology will do more good than harm. Project Debater, a system developed by IBM, spoke on both sides of the argument, with two human teammates helping each side. Speaking in a female American voice to a crowd at the University of Cambridge Union on Thursday evening, the AI gave each side's opening statement, using arguments drawn from more than 1,100 human submissions made ahead of time. On the proposition side, arguing that AI will bring more harm than good, Project Debater's opening remarks were darkly ironic. "AI can cause a lot of harm," it said. "AI will not be able to make a decision that is the morally correct one, because morality is unique to humans."
“AI companies still have too little expertise on how to properly assess data sets and filter out bias,” it added. “AI will take human bias and will fixate it for generations.”
The AI used an application known as "speech by crowd" to generate its arguments, analysing submissions people had sent in online. Project Debater sorted these into key themes and identified redundancy: submissions making the same point in different words.
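The redundancy-identification step can be sketched with plain bag-of-words cosine similarity. This is a deliberate simplification with made-up example submissions; IBM's actual pipeline is far more sophisticated and is not public at this level of detail:

```python
import math
from collections import Counter

def bow(text: str) -> Counter:
    """Bag-of-words vector: lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def deduplicate(submissions, threshold=0.6):
    """Keep one representative per group of near-duplicate submissions."""
    kept = []
    for s in submissions:
        if all(cosine(bow(s), bow(k)) < threshold for k in kept):
            kept.append(s)
    return kept

subs = [
    "AI will take human bias and fixate it for generations",
    "human bias will be fixated by AI for generations",   # near-duplicate
    "AI creates new jobs in certain sectors",             # distinct point
]
print(deduplicate(subs))  # the near-duplicate is dropped
```

Real systems would use sentence embeddings rather than raw word overlap, so that paraphrases with no shared vocabulary are still caught, but the grouping logic is the same.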
The AI argued coherently but had a few slip-ups. Sometimes it repeated itself, for example while talking about the ability of AI to perform mundane and repetitive tasks, and it didn't provide detailed examples to support its claims. While debating on the opposition side, which advocated the overall benefits of AI, Project Debater argued that AI would create new jobs in certain sectors and "bring a lot more efficiency to the workplace". But then it made a point counter to its own argument: "AI capabilities caring for patients or robots teaching schoolchildren – there is no longer a demand for humans in those fields either."
The pro-AI side narrowly won, gaining 51.22 per cent of the audience vote. Project Debater argued with humans for the first time last year, and in February this year lost in a one-on-one against champion debater Harish Natarajan, who also spoke at Cambridge as the third speaker for the team arguing in favour of AI. IBM has plans to use the speech-by-crowd AI as a tool for collecting feedback from large numbers of people. For instance, it could be used by governments seeking public opinions about policies or by companies wanting input from employees, said IBM engineer Noam Slonim. "This technology can help to establish an interesting and effective communication channel between the decision maker and the people that are going to be impacted by the decision," he said.
Alphabet subsidiary X, the former Google X, focuses exclusively on ambitious "moonshots": applications of tech you might expect to find in science fiction rather than in real product development. Like a robot that can sort through office trash.
X does a lot of its work more quietly than other Alphabet companies — until it’s ready to share some of its progress. It has reached that point with the Everyday Robot Project, an ongoing effort that X has been working on for “the past couple of years,” according to project lead Hans Peter Brondmo, who in a Medium post today shed some light on what the project is and what it does.
Brondmo compares robotics today to computing in the 1950s and 60s: it's a working reality, but it happens in dedicated spaces, and the only people interacting with the machines regularly are specially trained operators using them for professional purposes. The challenge, then, is to usher in an era of robotics akin to the era of consumer computing; in other words, how do we get to a world where ordinary people live and interact with robots every day?
The challenges are both more mundane and more complex than you might imagine: They have everything to do with stuff we take for granted every day, like other people walking around, trash bins that are out at the curb one day and gone the next, furniture that moves around, different weather conditions and just about anything you can think of that’s a pretty normal part of everyday life but hard to predict exactly day-to-day. Robots work best with specificity and exactness, especially when it comes to programming.
The Everyday Robot Project knew this, and quickly determined that to create robots that are genuinely useful to actual people going about their lives, the key was to “teach” rather than “program,” according to Brondmo. That meant working with the team at Google AI, first in a lab setting, and then out in the world. That’s where it arrived at the robot it’s detailing today: One it successfully taught to sort through garbage at X’s own offices.
The robot, trained via simulation and reinforcement learning, among other techniques, managed to reduce the level of waste contamination (putting the wrong garbage in the wrong place and causing the whole contents of that bin to go to landfill instead of being recycled, for instance) from around 20% to less than 5%. If you've ever worked in a building certified as green by an officially recognized standard, you'll know how impressive this is in terms of overall impact.
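The "teach rather than program" idea can be illustrated with a toy tabular reinforcement-learning loop. The item and bin names below are hypothetical, and the real system uses deep RL and large-scale simulation rather than a lookup table, but the core loop — act, receive reward, adjust — is the same:

```python
import random

random.seed(0)

# The agent never sees this mapping; it must discover it from reward alone.
ITEMS = ["bottle", "banana_peel", "paper"]
BINS = ["recycling", "compost", "landfill"]
CORRECT = {"bottle": "recycling", "banana_peel": "compost", "paper": "recycling"}

# Tabular action values: Q[(item, bin)] -> expected reward for that choice
Q = {(i, b): 0.0 for i in ITEMS for b in BINS}
alpha, epsilon = 0.5, 0.2  # learning rate, exploration rate

for step in range(2000):
    item = random.choice(ITEMS)
    if random.random() < epsilon:                       # explore a random bin
        act = random.choice(BINS)
    else:                                               # exploit best-known bin
        act = max(BINS, key=lambda b: Q[(item, b)])
    reward = 1.0 if CORRECT[item] == act else -1.0      # simulated feedback
    Q[(item, act)] += alpha * (reward - Q[(item, act)]) # one-step update

# The learned policy: greedy choice per item
policy = {i: max(BINS, key=lambda b: Q[(i, b)]) for i in ITEMS}
print(policy)
```

After a couple of thousand simulated sorts, the greedy policy matches the hidden correct mapping — no one ever programmed the rule "bottles go in recycling"; the agent inferred it from feedback.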
Aside from making a significant dent in the amount of unneeded waste heading to landfill from a sizeable office, this development helps X prove the feasibility of its ultimate goal of making robots everyday affairs for most people. There's still a long way to go before robots are commonplace companions (the smartphones we carry around everywhere, in the general computing analogy), but this is a step in that direction.
DGTAL.AI Inc., 16192 Coastal Highway, Lewes, Delaware
The UN has identified Sustainable Development Goals (SDGs) that need the attention of world communities, and technological innovation offers great solutions to address them. At DGTAL.AI we discuss SDGs and technological solutions on a single platform for the benefit of tech enthusiasts and talents, adding value toward the solutions and thereby serving the cause of a billion people. DGTAL.AI is a not-for-profit initiative.
Copyright © 2020, DGTAL.AI Inc.
DGTAL.AI News & Views of Exponential Tech world