The Disruption of Energy Storage – Guest post by Brandon Ng

While press coverage of and overall hype around energy storage have increased notably – hardly an article on renewable energy is written without a mention of energy storage – energy storage is not an intrinsically new or novel concept. Pumped storage hydro (PSH) stations – where water is pumped up an incline into a large reservoir or body of water when the supply of energy exceeds demand, and released through turbines when demand exceeds supply – have been around since the late nineteenth century.

Pumped Storage Hydro

The predominant issues with PSH stations are twofold:

  1. their economics only make sense at a grid-wide, utility scale; and
  2. to be cost competitive, PSH stations can only be developed in areas where the necessary topographical features – elevation and availability of water – already exist.

Energy Storage Systems

As such, the ‘revolution’ in energy storage today isn’t its invention so much as the widespread adoption of ‘micro’ energy storage systems (ESSs) without geographic limitations and/or by non-utilities: the industrial, commercial and residential sectors.

The most obvious and widely documented application of such ‘micro’ ESSs is in the continuous provision of energy generated by rooftop solar PV panels or small wind turbines. It could be said that both these forms of renewable energy generation are novelties in the developed world, where grid electricity is, for the most part, reliable and affordable (relative to incomes). However, for many communities that lack access to grid electricity altogether, usually for geopolitical or economic reasons, solar and/or wind-based power generation is the only cost-effective means of electrification. It is in such communities that energy storage systems are both necessary and transformative in the provision of continuous electricity, one of the pillars of the modern first world.

In parts of the world where reliable grid-based electricity is prevalent, whether the market for ESSs shifts from utility-scale systems for power providers to ‘micro’ ESSs in the industrial, commercial and residential sectors remains to be seen. If the landscape for ESS products does change, two major factors are likely to drive the shift: government policy, and the stance utilities take on innovation, be it in business models or in technology.

Intra-day Pricing

Take the concept of variable intra-day pricing of electricity: the notion that the value of electricity varies within a 24-hour day as a reflection of the ever-changing relationship between supply and demand. Yet fixed-rate tariffs – often linked to prescriptive government policies – are still a pervasive (although not universal) feature across many major market segments. Under such price-control mechanisms, the impetus to deploy energy generation or storage assets to match real-time supply against demand rests with the power providers.

If, on the other hand, the intra-day price of electricity is allowed to vary with its intrinsic value (as determined by market economics), then a case may exist for grid-connected ESSs deployed by the end users themselves. Such ESSs would allow users to purchase power during off-peak periods, when it is most economically favourable to do so, and consume it (or even sell it back to the utility operators) during peak periods, when energy is most valuable. This process is called ‘time-shift arbitrage’. At the risk of grossly oversimplifying a complex issue, the implementation of variable intra-day energy pricing in market segments where it is not already adopted is likely to be driven by government policies as a means of reducing energy consumption and the nation’s carbon footprint.
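To make the arbitrage concrete, consider a minimal sketch of the daily value calculation. All figures here – prices, battery capacity and the 90% round-trip efficiency – are illustrative assumptions, not data from any particular market:

```python
# Illustrative time-shift arbitrage for a small grid-connected battery.
# Every number below is an assumption chosen for the example.

OFF_PEAK_PRICE = 0.10         # $/kWh paid to charge overnight
PEAK_PRICE = 0.30             # $/kWh avoided (or earned) at the evening peak
USABLE_CAPACITY_KWH = 10.0    # energy shifted per day
ROUND_TRIP_EFFICIENCY = 0.90  # fraction of stored energy recovered

def daily_arbitrage_value(off_peak, peak, capacity_kwh, efficiency):
    """Value of buying one full charge off-peak and using it at peak."""
    cost_to_charge = capacity_kwh * off_peak
    value_at_peak = capacity_kwh * efficiency * peak
    return value_at_peak - cost_to_charge

profit = daily_arbitrage_value(OFF_PEAK_PRICE, PEAK_PRICE,
                               USABLE_CAPACITY_KWH, ROUND_TRIP_EFFICIENCY)
print(f"Daily arbitrage value: ${profit:.2f}")  # $1.70 under these assumptions
```

Note how the round-trip efficiency eats into the spread: the wider the gap between peak and off-peak prices, the more losses the system can tolerate.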

Market changes

A host of technical and structural market changes are, of course, required before this can be universally realised, two obvious ones being:

  1. an increase in the overall price of energy (either due to high commodity prices or the pricing in of the environmental impact of energy generation); and/or
  2. a decrease in the cost, or an increase in the service life, of today’s ESSs (see the sketch below).
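A back-of-envelope way to see why the second lever matters is to spread the system’s upfront cost over every kilowatt-hour it will ever deliver, and compare that figure to the intra-day price spread. The numbers below are again purely illustrative assumptions:

```python
# Rough levelised cost of storage (LCOS); all figures are assumptions.
SYSTEM_COST = 7000.0          # $ installed
CYCLE_LIFE = 4000             # full charge/discharge cycles over the service life
USABLE_CAPACITY_KWH = 10.0    # energy delivered per cycle
ROUND_TRIP_EFFICIENCY = 0.90

lifetime_kwh = CYCLE_LIFE * USABLE_CAPACITY_KWH * ROUND_TRIP_EFFICIENCY
cost_per_kwh = SYSTEM_COST / lifetime_kwh
print(f"Cost per delivered kWh: ${cost_per_kwh:.3f}")
# ~$0.194/kWh here: arbitrage only pays once the peak/off-peak spread
# exceeds this figure, which is why lower cost or longer life is a precondition.
```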

Great strides have been made by policy makers and energy storage companies on both fronts, although the value proposition offered by ESSs as an enabler of time-shift arbitrage remains limited today. If current trends are any indication, however, this is the exciting reality we can reasonably expect to inhabit in the near future: one in which a broad, distributed network of micro ESSs is integrated into the fabric of modern, 21st-century households to provide cheaper, more reliable and more sustainable power.

 

Brandon Ng
Co-founder, CEO at Ampd Energy
(rebranded from QFE)

Brandon’s profile on Linkedin

The Disruption Of Genomics – Guest Post by Dr Brad Worrall and Stephen R Williams

A decade ago, sequencing the more than 3 billion bases of the human genome – the feat that gave birth to the field of modern “Genomics” – was a disruptive force in science and technology that changed the way we think about disease forever. The development of this technology has led to the sequencing of 100+ eukaryotic and prokaryotic species – and viruses such as HIV – including not only Homo sapiens and close relatives like the Neanderthal, but also E. coli, mouse, chimp, and a host of other organisms. Today, we are so efficient at generating these data that sequencing a human genome in a couple of days is the norm, and digital interpretations of these genomes have been performed tens of thousands of times over. While generating human genomic sequences, and to a lesser extent analyzing the digital readout of these genomes, has become commonplace, the impact of this information on society as a whole is just starting to be felt. The impact on society, and specifically on the field of healthcare, is a building wave that will soon take shape as a disruptive force in clinical education, day-to-day practice, and the financial infrastructure behind our healthcare system.

Individual sequencing

For an individual to have his or her genome sequenced today, the cost is ~$1,000 USD, with the cost per “base” falling every day as the technology becomes more and more ubiquitous. This technology has the potential to benefit millions, if not billions, of people worldwide. But as the cost drops, individuals will increasingly arrive at the clinic having had their genome sequenced before any healthcare worker recommended it, which will change the current paradigm of “have disease/syndrome, get sequenced” to “already sequenced, what can this tell me about my disease risk?” The question is: how will this disrupt the current way medicine is practiced and markets are branded?

The academic conundrum

Currently, the number of hours that a typical medical student spends training in genomics is limited, excluding those specializing as clinical geneticists. The field of genomics is expanding so rapidly that it is almost impossible for curricula to keep up, despite the impact this information will have on day-to-day jobs in medicine. The need to educate more clinical partners in healthcare has become ever more pressing. These would include clinically certified geneticists, genetic counselors, medical geneticists, and bioinformaticians. The last is an extremely underfilled position that would benefit any healthcare institution in the years and decades to come, as genomic information becomes more commonplace in the clinic. However, are the decision-makers, both public and private, willing to make an investment in the short term to prepare for the wave of genomic information that will inevitably hit the clinic? And do we as a community have the motivation to retroactively train individuals who are already in practice?

The wider impact of genomics

In chronic disease treatment there is an established model of the ‘case manager’. Initially, a nurse reaches out to help a patient who has a chronic disease. The patient is then assisted through the process of checkups, screenings, and treatment until the disease is resolved or no longer manageable; the case manager leads this process. As genomics becomes more and more disruptive, would our healthcare system benefit from a genetic information manager? How will individuals process this information outside of contact with a genetic counselor (someone dedicated to this role), who is typically called upon only by referral? Should this information be processed in a family-wise fashion? One’s genomic information affects everyone in one’s family tree, and these dynamics can be complicated and outside the training of today’s genetic and genomic specialists. Further, who pays for this “pre-disease” counseling, which could be both emotional and biological?

Dropping costs

As costs drop and individuals seek out this information on their own, marketing, branding, and drug–genome interaction awareness become important. We already know that pharmacogenomic information is profoundly important. For example, azathioprine, given to individuals with a specific genetic variant, is associated with a lethal side effect; in fact, it is considered malpractice not to check enzyme function by genetic testing of TPMT. The HIV drug abacavir works well, except that it will kill some individuals carrying a specific genetic variant: in unselected patients, 5–8% develop a potentially deadly hypersensitivity reaction within the first 6 weeks of antiretroviral therapy. Prospectively screening individuals for the risk variant (HLA-B*5701 status) prior to starting therapy costs ~$17/person and avoids the far more expensive and deadly hypersensitivity reactions in more than 500 people for every 10,000 treated. Clinical practice and insurance coverage rapidly incorporated this step, and it has been hailed as a model for the adoption of a pharmacogenomic test.
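The abacavir case lends itself to a simple cost sketch. The $17 screening cost and the “more than 500 avoided reactions per 10,000 treated” come from the figures above; the cost of treating a hypersensitivity episode is a placeholder assumption, since the post does not give one:

```python
# Cost sketch for prospective HLA-B*5701 screening, per 10,000 patients.
PATIENTS = 10_000
SCREEN_COST = 17           # $ per person (from the text)
REACTIONS_AVOIDED = 500    # lower bound per 10,000 treated (from the text)
EPISODE_COST = 5_000       # $ per reaction treated -- hypothetical figure

screening_outlay = PATIENTS * SCREEN_COST         # $170,000
avoided_costs = REACTIONS_AVOIDED * EPISODE_COST  # $2,500,000 if the assumption holds
print(f"Screening outlay:          ${screening_outlay:,}")
print(f"Cost per reaction avoided: ${screening_outlay / REACTIONS_AVOIDED:,.0f}")  # $340
print(f"Avoided treatment costs:   ${avoided_costs:,}")
```

Even under conservative assumptions the screening pays for itself many times over, which helps explain why clinical practice and insurers adopted it so quickly.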

Beneficial effects

These types of interactions will encourage pharma companies to develop a genetic test at the same time as the drug, where the standard of care may be to test for the genetic variant along with the prescription. Further, genetic and genomic studies can help companies market directly to consumers as individuals become more intimate with their genomes. This will narrow marketing to targeting by genotype and could potentially avoid life-threatening side effects and liability.

Financials of Genomics

This leads us to the financial aspect of genomics, which will certainly be a disruptive force in healthcare. Even though the costs of generating genomic information are at an all-time low, relatively few diseases can be specifically diagnosed from a genomic test, so insurance companies have not bought into covering the sequencing of each individual genome as good practice in preventive medicine. Thus, the out-of-pocket cost for the patient remains high. This leaves us with another question: ‘What is the cost/benefit of having healthy people receive their genomic information, given our limited understanding of the diseases that cause the most healthcare burden worldwide (i.e. vascular disease, cancer, Alzheimer’s, diabetes)?’

The ethical issues

As with any technological innovation, the genomic revolution and the implementation of genomics in clinical care have raised a slew of ethical issues: genetic privacy (who can and cannot have access to genomic data – patients, families, spouses, employers), ownership (who controls what can and cannot be done with my genetic information – research, commercialization, public health), impact on family members (does the fact that my genetic information has relevance to my relatives give them any right to know, or not to know), and perhaps most acutely, the right not to know, or to change my mind about knowing. On a small scale, the story of the discovery of the gene implicated in Huntington disease is instructive. Before the genetic variant was identified, three-quarters of at-risk individuals claimed a desire to know their genetic status; once the test became available, fewer than a quarter chose to get tested. The ability to decide whether to know or not may be more important to many than the actual knowing.

The legal / patent issue

Genomics provides ample opportunity for branding, as we already see. Companies, hospitals, and clinics position themselves at the vanguard, claiming cutting-edge practice and innovation. A quick perusal of the New York Times Magazine shows multiple healthcare systems touting their use of genomics to target cancer treatment, tailor therapy, and identify risk. On the other hand, broadly available genome-wide data has substantial implications for companies that patented genomic information (e.g. Myriad Genetics for BRCA1), a controversial practice. Both the United States Supreme Court in 2013 and the Australian High Court in 2015 ruled that naturally occurring DNA sequences are ineligible for patents. In the European Union, some genomic sequences can still be patented, subject to specific criteria. Nearly all Latin American countries have banned patents on genetic sequences; the situation in Asia is less clear. Broad availability of whole genome data will undoubtedly challenge the tenability of at least diagnostic genomic patents on a practical level.

The future prospects…

What are the avenues that excite us? The science driving this disruptive force continues to evolve and change. We have gone from individual genetic tests, to genome-wide association studies, to exome sequencing (a cheaper but incomplete way to get genetic information), to whole genome sequencing; now a raft of other -omics (epigenomics, proteomics, metabolomics, and metagenomics) will be added to the mix and interact with genetic information, creating an exponential growth of information. Under the current standard for clinical investigation, physicians and other practitioners tend to take a stepwise approach, going from focused and targeted testing to broader methods. Broad availability of whole genome sequencing at an attainable price will upend this process and may in fact eliminate the intermediary technologies altogether.

 

Bradford B. Worrall, MD, MSc
Harrison Distinguished Teaching Professor and
Vice-Chair for Clinical Research of Neurology
and Professor of Public Health Sciences
University of Virginia

Stephen R. Williams, PhD
Assistant Research Professor
Department of Neurology
University of Virginia

 

How Disruptive Is Climate Change? Guest post by Giles Gibbons

Let’s start with a definition: for me, Climate Change as a business force is about the reduction of resources used by a company in order to create a sustainable business. As such, I think that Climate Change is not a disruptive force, at least not as a standalone item; the disruption occurs only when it is linked to other forces. For instance, Uber is a tech disruptor first that happens to come with a massive impact on climate change. Over the years, we have seen that attitudes with regard to climate change have been very slow to evolve. Recycling has grown in a fairly uniform way over the past 15–20 years. Yes, we see climate change entering into the psyche and into business plans, but not in a disruptive manner. If there is disruption, it comes because Silicon Valley companies have disrupted an industry and, as a by-product, there has been an impact on climate change. If climate change is about using ever fewer resources in a more astute and efficient manner, it is an ever-present challenge for businesses. We are not just referring to saving environmental resources; we are also looking at reducing water bills, electricity costs, and so on. It’s an evolutionary and ongoing business challenge.

When we look at the Silicon Valley innovations that have made such waves in various industries, business models are being disrupted and, in many cases, climate change has received a major windfall. The point is that, while many of these initiatives are climate-change positive, very few start out with Climate Change as the disrupting fuse. [my term!] Maybe Elon Musk’s Tesla is the best counter-example. For the most part, though, the corollary is that there is a lot more money floating around for investments in social impact.

On the side of CSR, employees and clients are both aware of it; however, it is rarely critical to decision-making. CSR is a component, but an ever-shrinking one, in our experience. At Nike, for example, it is the CFO who has taken charge of CSR, where it is all about driving a sustainable business. The equation leads with resource and cost reduction; afterwards, such actions contribute to a positive image for the company. I like to say that CSR provides a broad hue from the consumers’ perspective.

To the extent that I am positive about human beings’ power to solve problems, I think that we will come up with solutions for the challenges of climate change and finite natural resources. For example, solar power is projected to become twice as powerful within the next 24 months; within the same timeframe, energy storage is projected to become 10x more effective. These advancements will of course bring other challenges with them, but from a business perspective the need to reduce energy costs will remain similarly pertinent over the long term.

Bottom line: we are already on a positive journey with regard to climate change. There will be iconic moments that may alter and shape the narrative, but the need to adapt to climate change is old news. Any potential disruption will occur first and foremost for businesses that are directly in the energy sector, and secondly for complementary businesses – such as transportation – that rely heavily on the consumption of these energies.

Giles Gibbons, CEO and Founder of Good Business, author of “Good Business: Your World Needs You”

The Technology Based Restaurant – Guest post by Giles Morgan

The low hum was faint at first, slowly getting louder the more you concentrated on the sound, until eventually the black object broke over the horizon, at first looking like a Blackhawk helicopter from a Hollywood blockbuster, then, as it moved in regimented fashion up and down the vines, revealing its identity as a drone. Creating wine used to be about experience, gut instinct and soil, to name a few of the things guaranteeing a successful crop. Now it’s sensors, data, software and, of course, drones. Fitted with multispectral and visual sensors, these drones collect multitudes of data determining the health of the vineyard (e.g. crop vigour), whilst on the ground other sensors monitor temperature and soil. When the process reaches the bottling stage, an NFC (near-field communication) label is produced and placed on the bottle, to be read throughout the supply chain from producer to logistics, through customs, then to wholesale and finally the restaurant.

The restaurateur checks her cloud-based online platform and sees a visual representation of her food and wine stock supply chain. Three days earlier, she had ordered a resupply of exotic ingredients for her head chef, and she can see where those supplies are in real time on her interactive map. A warning notification pops up on her SiteSage Energy & Asset Management system, alerting her that the temperature in one of the freezers in the restaurant kitchen is rising and that she has three hours to move supplies to another unit. Given that restaurants use up to three times more energy than traditional commercial enterprises, the restaurateur is pleased to see that she has saved 10% on energy in the last quarter. Sensors in the fridges show that there is enough spare capacity in the remaining units, and she organises for the food to be moved.
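The freezer alert reduces to simple threshold-and-trend logic. Here is a minimal sketch of the idea; SiteSage’s actual rules are proprietary, and the threshold, readings and three-hour window below are hypothetical:

```python
# Hypothetical freezer monitoring: extrapolate the warming trend from two
# temperature readings and alert if the food-safety limit is close.
SAFE_LIMIT_C = -15.0      # assumed food-safety threshold
ALERT_WINDOW_H = 3.0      # warn when the limit is this many hours away

def hours_until_unsafe(temp_then, temp_now, hours_elapsed):
    """Linear extrapolation; returns None if the freezer is stable or cooling."""
    rate = (temp_now - temp_then) / hours_elapsed  # degrees C per hour
    if rate <= 0:
        return None
    return (SAFE_LIMIT_C - temp_now) / rate

eta = hours_until_unsafe(temp_then=-21.0, temp_now=-19.0, hours_elapsed=1.0)
if eta is not None and eta <= ALERT_WINDOW_H:
    print(f"ALERT: freezer hits {SAFE_LIMIT_C} C in ~{eta:.1f} h -- move stock")
```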

The restaurant manager arrives and fires up his Oracle MICROS mTablet. He views analytics on the previous day’s trading through the EPOS (Electronic Point of Sale) system. He then checks the real-time view of the restaurant’s bookings for the day and sees several tables available during the first cover; at the click of a button, he pushes a marketing campaign into the OpenTable mobile application to drive bookings. He analyses the evening reservations and wait lists, as well as the sales and inventory data, and uses the insights to automatically update his staff coming in for the lunchtime session. A notification is also autonomously pushed to a group of temporary waitresses via the restaurant’s mobile employee app, offering a shift this evening. The manager receives an almost instant response from Sarah (a temporary waitress registered on the employee app) and the shift is confirmed back to her.

All the servers in the restaurant are equipped with mobile devices that can send orders directly to the kitchen. The craft beer and wine bottles (including the wine bottles from the vineyard) arrive and are lined up in the bar. The restaurant uses SteadyServ iKeg to manage its array of beers: sensors attached to each keg track the type and style of beer, when it was delivered to the restaurant, when it was opened and, of course, when it will run dry. Spirits in the restaurant use smart spouts from BarVision that provide data insight from each pour. Everything is precisely monitored and integrated with the EPOS systems.

On arrival from the logistics courier, each box is scanned and the data is uploaded to the cloud. Instant personalised emails are sent to relevant customers booked in for today, offering them the chance to pre-order their favourite beer or wine before arriving, and the drinks list is automatically produced for the bartender so that a seamless customer experience is created. A further email is automatically sent to customers after their evening, offering them the opportunity to purchase a case of the wine or craft beer they enjoyed.

The doors open for lunch; every day when they do, a bot fires a tweet to all the restaurant’s followers on Twitter with a link to today’s menu and a few special offers targeted at filling unsold tables over the next 7 days.

Customers start to arrive and, as usual, some change their minds about which table to sit at; the waitress checks her mobile tablet and, with the swipe of a finger, moves them to another table, automatically rearranging the tables that don’t already have customers.

A couple sit down and use their mobile phones and apps such as Secret DJ to request their favourite music during their meal; requests are queued on the restaurant’s sound system and played in order, falling back on a playlist should there be a lack of requests.

Their drinks arrive immediately, as they have already been pre-ordered. The waitress hands the customers mini slimline tablets which act as their menus. With every order, the menu is automatically updated, removing customer disappointment if a particular item has sold out; in fact they won’t even know, as it is silently removed from the screen.

According to Gartner, 6.4 billion connected devices will be in use in 2016, with a staggering 5.5 million new devices connecting every day, and this will reach 20.8 billion devices by 2020. The Internet of Things (IoT) will support total services spending of $235 billion in 2016. Finally, by 2020 Gartner believes more than half of major new business processes and systems will incorporate some element of the Internet of Things.

Exciting times lie ahead with the use of IoT in many industries, that’s for sure. However, it is not without its dangers, especially around security: with so many devices connected comes the opportunity for hackers to use them to launch DDoS attacks, as we have already seen in 2016. It is also important to have a robust management platform in place to monitor and support these large-scale connected networks.

As a technologist and innovator, I’m excited by this revolution; if it also removes the disappointment of a corked bottle of wine or helps me discover new foods, then even better. Chin! Chin!

 

Giles Morgan,
Global Digital Leader |
Global TAS at EY ‘Misfit & Innovator’ M&A

Diversity & Inclusion – Guest post by Michael Stuber

What should be seen as a business case and common sense turns out to be a long-lasting challenge for people and organisations

While differences have always existed in societies, and certainly in business organisations, the phenomenon of diversity has become a disruptive force over the past 25 years. The end of the East–West divide, in combination with the emergence of the Internet, initiated not only the Third Industrial Revolution but also a fundamental paradigm shift in the way many people live and work (together), at least in the Western world. Changes include an unprecedented growth in individuality (and hence diversity), a strong preference for multi-cultural environments (including the workplace), and multiple new ways of collaboration and communication. As a result, all levels of human cognition have been affected, which presents the business world with huge opportunities but also with challenges.

Reaping the disruptive value of Diversity

In order to realise benefits from diversity, the value chain of Diversity & Inclusion needs to be managed carefully and, ideally, in a systematic way: differences can only be turned into competitive advantage when openness prevails – individually and in the organisational culture – and when inclusive processes, behaviour and communication are applied. The benefits of getting this value-creation process right have been demonstrated by 205 robust studies portrayed in the International Business Case Report. Some studies highlight that in order for diversity to add value, a healthy level of conflict, e.g. through minority dissent, is required. This hint at existing challenges is only the tip of the iceberg, nowadays discussed under the headline of Unconscious Bias.

Hindering the productive disruption of Diversity

While the term ‘Unconscious Bias’ most often describes specific types of implicit associations, my analysis of existing research from the past decades suggests that it serves perfectly to describe six types of biases in three areas that have one thing in common: they make it hard for individuals, teams and organisations to tap into the potential of Diversity by consistently practicing Inclusion. The main categories of Unconscious Bias that are of immediate relevance to Diversity Management include the personal/human preference for sameness, stereotypes about ‘others’, the biased application of (theoretically) meritocratic processes, micro-inequities, unwritten rules in mono-cultures, and the organisational preference for reproducing the success types of the past. The dynamics can be observed at individual, process and organisational levels, and some biases stabilise each other in a way that makes mitigation a complex task.

Making Diversity & Inclusion work is complex

Over the past twenty years, a number of success formats dominated each of the different eras – each claiming to be the silver bullet everyone was looking for. In fact, the critical questions representing resistance against diversity, inclusion or both have not changed much over the past decades. What’s in it for me? For the business? Why change in the first place? Is there any urgency at all? These and other common questions show quite clearly that a complex change strategy must be designed in order to nudge people and organisations towards overcoming initial and subsequent barriers, and gradually unleashing the power of differences. A combination of different change models has proven advisable: the generic trifold model of leadership, tools and cultural change serves as a backdrop against which more D&I-specific approaches can be designed; the different types of Unconscious Bias provide another template for developing roadmaps; multi-phase models for organisation development, such as Kotter’s 8 steps, make timing more effective; and, finally, the value-creation model of D&I provides quality checkpoints to tell whether your strategy will eventually lead to the desired benefits. One more thing still needs to be added to the complexity: stakeholder management continues to be a challenge in many if not most D&I processes, since the perceptions, personal convictions, needs and possibilities of different target groups – and of individuals within those groups – vary a great deal.

Michael Stuber,

Founder and Owner-Manager of European Diversity
VP of International Affairs, European Institute for Managing Diversity

 

The Power Of The Internet Of Things – Guest post by Jim Hunter (@theiotguru)

The term Internet of Things (or IoT) was coined in 1999. Just what does this super-generic moniker mean? Literally, it means that physical devices are beginning to connect to the Internet. If you think about that for a few seconds, that is technically what the Internet already is: physical devices connected together through a common network. The physical devices or “things” of the internet have so far been servers, routers, switches and all forms of connected compute devices. From that perspective, IoT is a redefinition of what an internet thing is – more specifically, of what a connected compute device is. The vast majority of these new IoT-connected devices will be sensors: sensors that read, measure, collect and digitize the world around us.

Creating context

The reason sensors are so important is that they provide context. Today the most important devices for creating context are those we carry with us. These include our mobile devices and, in some cases, wearable devices that are loaded with sensors local to us.

Soon the most important devices will be those around us. In the near future, thousands of sensors will be fixtures in our environment, emitting contextual data messages. These sensors will broadcast contextual identifiers that answer the questions of who, where, what, when and why to applications that are personal to you. Sensors will measure and broadcast information about position, health, energy, radio strength, climate, traffic, vibration, stress, noise, light… basically anything that can be measured and has value to mankind will be measured. That measured information will be broadcast over a short distance to mobile devices within range. For any given moment in time, a detailed digital picture of you and your surroundings can be captured for your private applications to consider.

IoT privacy issues

The personal and private nature of your data and identity requires this design, as opposed to your device broadcasting its presence to other devices. This also means that creating informational infrastructure that works hand in hand with applications on personal devices is the future of the IoT, and a massive opportunity. This is not new: it is essentially how GPS location works. The GPS satellites broadcast small information messages that include their identity and the time; it is up to the personal location devices to make those messages usable for a consumer. This design is also appearing in shopping situations, where stores broadcast location-specific RFIDs (Radio Frequency IDs) to tell listening apps where products are in a store, and the application then converts those messages into product locations and related purchase deals for the user.

It is important to understand that there are a variety of ways to maintain security and privacy, even for a broadcasting RFID. For example, a given ID could actually be an abstracted hash. To make sense of it, the application would have to pass the hash to a decoder, which may be a cloud or fog service and may require authentication before decoding the hash into the actual information pertaining to the RFID. This authenticated lookup also allows a given RFID to decode into different information depending on the authentication level of a given user. For example, a teenager may get different information from an RFID than a parent or head of household would from the same RFID with the same application. Most importantly, a given RFID may return completely different information if decoded by completely different services. Consider RFID 0088776655AB: it may decode to a value that results in displaying a fire hydrant on an emergency firefighter’s Heads-Up Display (HUD); the same RFID may decode to a no-parking zone in a traffic application; and it may also decode as an obstruction warning for a person who is sight-challenged. Different apps can process the same IoT surroundings differently, which will enable a massive new wave of value-add applications.
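A toy version of that authenticated decode step makes the idea concrete. The decoder service, roles and returned values below are invented for illustration, reusing the RFID from the example:

```python
# Toy decoder: the same opaque broadcast ID resolves to different information
# depending on the authenticated role of the requester. All data is invented.
DECODE_TABLE = {
    ("0088776655AB", "firefighter"): "fire hydrant (show on HUD)",
    ("0088776655AB", "traffic_app"): "no-parking zone",
    ("0088776655AB", "accessibility"): "obstruction ahead on sidewalk",
}

def decode(opaque_id: str, role: str) -> str:
    """Resolve a broadcast ID for a given authenticated role."""
    return DECODE_TABLE.get((opaque_id, role), "no information at this access level")

print(decode("0088776655AB", "firefighter"))   # fire hydrant (show on HUD)
print(decode("0088776655AB", "traffic_app"))   # no-parking zone
print(decode("0088776655AB", "teenager"))      # no information at this access level
```

In a real deployment the table would live behind an authenticated cloud or fog service, and the IDs themselves would be rotating hashes rather than stable strings.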

The above example is just one of the many value propositions of IoT. With IoT, we are giving a voice to an unprecedented number of things that can measure every aspect of our world. These things will provide context like we have never known before. They will answer the questions of who, where, what, when and why.

Jim Hunter,
Chief Scientist & Technology Evangelist at Greenwave Systems Inc.,
@theiotguru on Twitter

The Burgeoning Landscape of Localization Tools and Smartphones – Guest post by Anne Bezançon

For the first time in human history, every movement in the physical world can be identified, recorded and analyzed through our mobile phones, smartwatches, fitness trackers and soon smart-tattoos and implants. This fundamentally changes our relationship to ourselves and others because “digital” and “physical” are now merging, providing a completely new source of data and analysis, and ultimately a higher-fidelity representation of who we are.

The expected impact of localization tools

The impact of these capabilities goes way beyond their initial application to maps and navigation, or even geofence marketing. We are exploring use-cases across all verticals: advertising (from attribution of campaigns to segmentation of audiences based on their behavior in the physical world), retail (consumer patterns, routes, frequency, times), healthcare (from tracking an infectious disease across the country to reminding folks of the proximity of a pharmacy to pick up their prescription), sports and entertainment (from performance-tracking apps to geofencing communications within a venue for a few hours), news (local citizen journalism), financial services (fraud prevention by matching transactions to places), transportation and logistics/delivery (from Uber to FedEx, route optimization, user feedback in real time), field service operations (team management for utilities), and, of course, public safety and government (managing refugee flows is a timely concern). From collecting anonymous data from millions of users and extracting statistical models of behavior, to addressing one individual’s specific needs through interactive systems, localization tools are going to change every sector of activity.
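At bottom, most of these use cases rest on one primitive: is this device inside a given zone right now? Here is a minimal geofence check using the haversine great-circle distance; the venue coordinates and radius are made up for the example:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    r = 6_371_000  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(device, center, radius_m):
    """True if the device's (lat, lon) lies within radius_m of the fence centre."""
    return haversine_m(*device, *center) <= radius_m

venue = (37.7786, -122.3893)  # hypothetical stadium centre
print(in_geofence((37.7789, -122.3890), venue, radius_m=200))  # True
```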

From a company’s perspective, what needs to be done to take advantage of it?

First, think about what you could know through these tools that you don’t today. Localization tools provide data that was not available before.

Second, research and identify the right partner who has the expertise and technology to help you.

Third, start doing something, iterate and learn. There are major opportunities for competitive edge with the right combination of the first two steps above.

What are some of the risks and opportunities?

The biggest risk lies in the need to define new boundaries and protection mechanisms for privacy, as regulators are still lagging behind the very rapid evolution of technology, particularly in mobile, the internet of things, etc.

The opportunities are many from a business standpoint, since more “picks and shovels” need to be built: specialized in certain “vertical” problems and solutions, in back-end computation of increasingly large amounts of data, or in front-end data visualization for both consumers and business decision-makers.


Anne Bezançon is the founder and President of Placecast, the leading enterprise platform for monetizing mobile location and user data at scale. The company specializes in providing proven, secure, privacy-first solutions for big data monetization to the largest Telecom (AT&T, Rogers, Telefonica), Financial Institutions and Media companies in the world. Over 500 brands have used the Placecast platform, including Starbucks, Subway, HP, JetBlue, McDonald’s, and Pizza Hut.

A native of France, Anne discovered her passion for technology when she helped develop the Minitel, a precursor to the Internet. Anne moved to Silicon Valley in 1996 and has since started three companies and participated in the launch of two more. In 1995, she organized the NGO Forum of the United Nations Conference on Women in Beijing, and pioneered private sponsorships from Apple and HP to enable training in word processing and email for 40,000 participants.

Anne was invited to meet with the French President during his official visit to San Francisco in March 2014. Anne was also named to the 2013 “Mobile Women to Watch” list from Mobile Marketer. In 2011, Anne attended the eG8 Summit, an invitation-only summit of leaders in government and industry focusing on the Internet in the context of global public policy. She writes thought leadership pieces for leading tech and business publications, including Forbes. Anne holds a diploma from Sciences-Po Paris, and an LLM in Business Law. She is the author of several patents in the field of location-based technology, and speaks frequently at various tech industry and business events.

The Car Of Tomorrow Inches Closer – Guest Post by Arthur Goldstuck

If one is looking for a barometer of the evolution of the motor vehicle, one of the best places to find it is in the rising pressure on car makers to display automotive technology at consumer tech shows.

While the likes of the Detroit, Geneva and Tokyo Motor Shows still dominate launches, unveilings and announcements, the technology breakthroughs are slowly moving across to the likes of the Consumer Electronics Show (CES) in Las Vegas and the Mobile World Congress (MWC) in Barcelona.

Marque by marque, the car makers are trying to find a place among the gadgets and smart devices. Twelve major manufacturers now make their way to CES as a matter of course. And one manufacturer did the unthinkable in 2016: launched a new vehicle at MWC.

The biggest news announced at CES 2016 was not a technology but a piece of paper. To be precise, a driving licence. But this was no ordinary licence.

The US state of Nevada, home of CES, awarded the world’s first test licence for autonomous driving of a standard production car. The really big news was that the beneficiary was not a futuristic concept car of the kind launched by Nissan at the Tokyo Motor Show in October or by Mercedes-Benz at last year’s CES.

Instead, it was the new Mercedes E-Class, with three standard production 2017 models given approval to drive themselves by the Nevada Department of Motor Vehicles (NDMV).

It is difficult to overstate the significance of this news. It means that the autonomous vehicle is no longer an experimental toy built by Google and operated by geeks. It means that vehicle software creators like Microsoft and BlackBerry no longer have to persuade manufacturers that this is their future. It means the likes of Ford, Audi and Volvo will also move on from elaborate adaptations of existing vehicles, modified steering and retro-fitted sensors.

As a Daimler statement put it, “The standard-production vehicle is already extensively equipped with intelligent technology. This means that, for testing purposes, it is necessary merely to make some smaller software modifications to the DRIVE PILOT control unit.”

But the robots have not taken over. Yet. For now, the self-driving cars will still need a trained test driver. While the tests are allowed on all highways in Nevada, human drivers have to take over for turning, merging and departing. The NDMV rules also require one passenger behind the wheel and a second passenger in the vehicle on test drives.

Meanwhile, the unveiling of a new vehicle at the MWC 2016 represented a seismic event in the history of both vehicle manufacture and consumer technology.

At the world’s largest expo devoted to mobile technology, the unveiling of the new Ford Kuga SUV marked the final crossing over of car technology into the preserve of consumer devices. While manufacturers have been showing off in-vehicle advances and self-driving possibilities at CES for around seven years, the event always played second fiddle in this arena to the Detroit Motor Show. Detroit would see the unveiling of the latest cars and tech, while Las Vegas would act as a showcase. Around six weeks later, MWC in Barcelona would find itself coughing in the exhaust smoke of the media bandwagon that had been and gone.

No more. The presence of the new Kuga at MWC was almost as significant as the keynote address by Mark Fields, CEO and president of Ford Motor Company, who declared the manufacturer’s repositioning as both an automotive and mobile business.

This repositioning is at the heart of the choice of tech show rather than a car show to launch a new vehicle. However, the Kuga also offers the most convincing evidence yet of the mainstream potential of the connected car.

It debuts the latest version of Ford’s on-board information and entertainment system, SYNC 3, which uses conversational voice commands to control audio, navigation, and climate functions. It also integrates seamlessly with most smartphones by supporting Apple CarPlay for iPhones and Android Auto for Android devices. It means that any app available on the handsets can appear on the vehicle’s 8-inch touchscreen display and be controlled from there.

This represents a dramatic breakthrough, particularly in the world of in-vehicle navigation, which traditionally confined drivers to the mapping systems that came with the cars – and were thus already obsolete when they rolled off the production lines. SYNC 3 allows the latest version of Google Maps, Apple Maps or HERE Maps, for example, to be called up from the phone.

The significance of this is that vehicles can, for the first time, take full advantage of the rapid evolution of mobile technology, apps and utilities. In the next few years, the technology will be rolled out to all new Ford vehicles, meaning that cars aimed at the mass market will enjoy the same information system advances as high-end vehicles.

The Kuga introduces new driver assistance technologies as well, serving as a precursor to autonomous or self-driving vehicles. The existing semi-autonomous Active Park Assist technology is joined by Perpendicular Parking functionality, which uses ultrasonic sensors to locate parking spaces and steer the vehicle into them. The driver still controls the accelerator and brake, but the hard work is taken over by the car.

Coming out of parking spaces also becomes safer through additional sensors. The Cross Traffic Alert uses a radar with a 40-metre range to warn drivers of vehicles approaching from either side.

The sensors, which are for now the key to making vehicles safer, will be at the heart of the future self-driving car. That means that the Kuga is not only a taste of the future, but also a proof-of-concept that will hasten the arrival of tomorrow’s car.

When it does arrive, as we are seeing in Nevada, it will also signal a new era of licensing authorities having to reeducate themselves on the capabilities and limitations of vehicles. But don’t expect that to happen overnight.

 

Arthur Goldstuck is founder of World Wide Worx and editor-in-chief of Gadget.co.za. Follow him on Twitter at @art2gee, and subscribe to his YouTube channel.

The Cryptocurrency Disruption – Guest Post by Michael Carrier

To understand the level of disruption that could issue forth from crypto-currencies like Bitcoin, it is first necessary to understand the magnitude of the invention that they represent.

Understanding the innovation

Crypto-currencies aren’t merely the ability to transfer value from one person to another. Our current ways of doing this are many, low cost, and relatively easy to construct. These systems, however, rely on one critical factor: the need for a trusted third party to conduct, verify, insure and facilitate the transfer. This is how it has been done, in many forms, across all societies for over a millennium.

Traditional value-transfer or transaction-communication systems like these work on a centralized hub-and-spoke architecture. They are typically banks, networks, or transaction facilitators like Visa.

This type of “centralized” architecture has worked well for governments because it allows complete control over the money and communication systems. Governments can license and easily regulate the key players in this type of arrangement. All transactions and deposits are tracked, their owners known and reported on, and it is a system within which taxes can easily be compelled. Economic players that do not maintain political/governmental favor will lose their license to participate in the centrally controlled economic system.

Only items of value like gold, physical cash, precious stones, or artworks stand outside the complete control of the modern government money system. These items allow for “de-centralized” value transfers and thus, from a government’s perspective, a less controllable system. They represent the possibility for individuals to exchange value outside a transaction system overseen by the government. These items are “bearer” instruments by nature, which means the holder is effectively the owner, and their mere existence is a threat to the current construction of the government/money system. But because these items are heavy, hard to conceal or transport in large quantities, and because governments are very keen to diminish or abolish their use as stores of value or mediums of exchange, they play a very limited role in the modern world of finance and commerce. In addition, centralized electronic ledgers are a lower-cost, faster and more effective method for value transfers.

But what if we could have both? What if it were possible to have both the low cost and high speed of an electronic money system and the “de-centralized”, bearer-like quality of a gold money system? A de-centralized electronic money system is what Bitcoin and crypto-currencies represent: in short, a massive disruption in how the world does money, with implications to match.

The Impact of Cryptocurrencies

The emergence of crypto-currencies changes everything. Crypto-currencies are electronic, easy to conceal and transmit, and impossible to control in the physical world. They enable an electronic “bearer” exchange of value that is as easy to send as email and effectively impossible to control. Billions of dollars’ worth of value can be held in an obscure pass phrase simply memorized by the holder; gone are the days of having to sew diamonds into the hems of your clothing to cross borders with items of value.
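That “value held in a memorized pass phrase” is quite literal: a private key can be derived deterministically from the phrase itself, so the holder carries nothing at all. Below is a minimal sketch of the idea (a so-called “brain wallet”); note that real wallets use far stronger, salted key-derivation schemes such as BIP-39, and a guessable phrase would be drained almost instantly:

```python
import hashlib

def key_from_passphrase(passphrase: str) -> bytes:
    """Derive a 32-byte candidate private key from a memorized phrase.
    Illustration only: production wallets use slow, salted KDFs, and
    low-entropy phrases are trivially brute-forced."""
    return hashlib.sha256(passphrase.encode("utf-8")).digest()

key = key_from_passphrase("correct horse battery staple")
print(key.hex())  # whoever knows the phrase controls the funds
```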

A money system which operates primarily through barter (individuals exchanging items), or through exchanging “bearer” certificates of value, would put the burden of reporting (for tax purposes) and of regulatory compliance more on the individuals directly involved in the transaction. It would rely on individuals to report or enforce transaction compliance, and this would vastly diminish all governments’ current abilities to compel legal decisions and payments, collect taxes, or enforce regulation.

To understand what kind of world this will bring, we have only to look to the past. Past economies ran on more of this kind of money system: gold and silver, or just cash, were the primary mediums of exchange and stores of value, not the centralized bank ledgers that we have today. As a consequence, individual power, or personal sovereignty, was much enlarged relative to the powers of the state. There was more individual liberty for the people in relation to the state. The political implications of the emergence of this new type of crypto-currency money system are hard to overstate.

Governments in the past were in less of a position to compel compliance financially, and therefore this “bearer” type of money system required a more voluntary commitment from the citizenry towards paying taxes, or left governments collecting revenues at the payment points they could control. These governments of the past were forced to play a much smaller role in the economy than they play today. The vast bulk of economic transactions back then were opaque to governments, and beyond the reach of their control to tax or regulate (regulations which for the most part simply translate political power into unfair economic advantage for those who can curry favor with those at the top of the political system). A centralized money system is the primary method of control for modern governments over people, corporations and even foreign states.

Crypto-currencies represent the historical pendulum swinging back toward the more de-centralized money systems of the past – systems in which governments have less centralized control over the economy and the economic players within it. The emergence of crypto transactions means we have seen the zenith of centralized governmental money systems. This will be a big disruption for the governments and corporations of the world.

The crypto-currency wave is really more correctly understood as a cryptography wave. Recent advancements in the science of cryptography are the underpinnings for the emergence of crypto-currency, but they also empower new forms of better-encrypted personal communications. These advancements are bringing back the opportunity for actual individual privacy and expanding the scope of individual sovereignty relative to that of the state. They allow for secure private communications between people, as well as enabling crypto-currencies which allow for the free exchange of value unmediated by the state and uncontrolled by its politically protected corporations. For statists this is a potential disaster, but for libertarians it is a potential renaissance.

The Internet Is the Largest Economy

The largest economy in the world is not the U.S. or China; it is the Internet, and the Internet has, for the first time, invented its own money. This change is potentially more significant than the invention of the Internet itself.

Despite its potency and, I would say, its inevitability, the crypto-currency wave has for now slowed down, stymied to a major degree for lack of one critical thing: jurisdiction.

No major government in the West or East has seen fit to provide legal jurisdiction, legal contract and definition for crypto-currency. These governments are all smart enough to understand that it isn’t really in their interest: supporting crypto now would be all downside compared to the almost perfect control they enjoy today over their money and banking systems. In the meantime, crypto-currencies have been forced underground, into the darker parts of the world economy both on- and offline. Yet despite this repression, crypto transaction volumes continue to grow, and the technology and its users continue to slowly expand.

Cryptocurrency – The Genie Is Out

The technological advancement that crypto-currencies represent is a genie which will not go back into its bottle. Eventually, no matter how hard they try, governments will be unable to prevent its rise, and some aggressive governments will see the opportunity in this new system and grant crypto-currencies protection within their jurisdictions. The first large governments to grant jurisdiction will reap big rewards over the governments that lag behind. The same first-mover advantage will hold for corporations: those that adopt this new paradigm first stand to gain at the expense of slower players in their markets.

Crypto-currencies will slowly change the money world. And I would call that disruptive!

Michael Carrier – Bay Area Crypto Entrepreneur

You can find Michael on Twitter @michaelpcarrier

3D Printing Is A Revolution. But Probably Not The Revolution You Were Thinking Of

Guest post by Duncan Stewart (@dunstewart)

3D printing, where machines take computer files and make objects out of metal or plastic, building them up one layer at a time, has been around since the 1980s. Also known as additive manufacturing, 3D printing was called part of the third industrial revolution[1], and largely entered popular consciousness with the availability of consumer 3D printers for under $1,000: in 2010, pundits proclaimed “a factory in every home.[2]”

Personally, I blame the TV show Star Trek: The Next Generation and that damned Replicator[3]! The idea that you could have a machine make – to order – almost any food, drink, or spare part captured the popular imagination. So much so that a 3D printer company actually named one of its products after the science-fictional technology[4].

But 3D printers in 2016 fail to match the 24th-century technology. Home 3D printers are difficult to use, jam, break down, and require multiple attempts to make the desired object. Not only are the machines expensive, but the feedstock plastic costs so much that 3D printing almost any part costs much more than just buying it in a store…and the process is slow: desktop 3D printers take hours to make parts only 5 cm or two inches high[5]. All of these challenges might be tolerable, except that the only things home 3D printers can make are small plastic items that aren’t even as useful or attractive as the toys you get inside a kids’ meal from a fast food restaurant!

That isn’t just the state of the art for home 3D printers in 2016…this will continue to be true for the next decade or more. There are 3D printers that work in metal or other materials that are more useful for the average consumer, but they will not be coming to our homes any time soon, for reasons of cost, size, complexity, not to mention the toxic fumes.

Home 3D printers are an interesting tool for individuals to learn about additive manufacturing. But the idea that they will do away with the shopping mall or the delivery of consumer goods in the near future is deeply misleading. In fact, most consumer 3D printers are likely to be experimented with for a while and then consigned to the garage or basement, along with the bread-maker and that pasta machine that never gets used. As further confirmation of this trend, one of the leading manufacturers of a sub-$1,000 consumer device announced at the end of 2015 that it was exiting the consumer business entirely[6].

But enterprise 3D printing technology is misunderstood too. As part of my research, I discovered that virtually all of the businesses that were using additive manufacturing were still using it for rapid prototyping or the manufacture of intermediate parts like tools, molds, or casts: final part manufacturing was less than 10% of all usage[7].

Prototyping may lead to faster and better product development, but it isn’t as sexy as the idea that 3D printing will allow for customized local manufacturing that would disrupt existing supply chains: no more need for offshore manufacturing at scale, transport, logistics, warehouses, and so on[8].

That change may happen one day, but it likely won’t be soon. According to one manufacturer who tried producing complex metal parts using 3D printers, it is technically possible, but the technology took longer than traditional methods, cost more, required extensive and expensive post-manufacture processing time…and the final parts were still not good enough from a materials perspective. The company no longer uses 3D printing to make those parts, and the CEO (who is actually a 3D printing enthusiast) said “the technology has a lot of potential…but needs another 10 years.[9]”

Are 3D printers EVER used for final part manufacturing? Very much so! Hearing aid shells and dental copings are almost exclusively made with 3D printers today[10], as are most braces for teeth[11], some orthotics for shoes[12], and even some titanium hips[13]. In 2015, the FAA approved the first aerospace part made using additive manufacturing[14], and they are seen as potentially disruptive in use cases where there is no convenient parts delivery service: naval ships at sea[15], or even the International Space Station[16].

But we need to put these applications in context. It is interesting that the FAA has approved a 3D printed sensor housing – but given that each and every Boeing 787 has 2.3 million parts[17], we can see that additive manufacturing is likely to capture only a very small share of the parts market. Almost all of the time, traditional manufacturing techniques are faster and cheaper for volume manufacturing.

3D printers have become dominant in the hearing aid and dental coping markets, but that doesn’t mean that their adoption will be equally rapid in more complex medical applications. According to one leading 3D printing company, the probable timeline is roughly as follows: “printing cartilage in three to five years, because it is both avascular and aneural, followed by the ability to print bone in roughly five to ten years. The ability to print nerves and thus allow for the creation of organs is likely 10-20 years into the future.[18]”

To close on a more upbeat note, we don’t have to wait a decade or two for 3D printing to save lives. The most advanced hospitals are already using 3D printers to build medical simulators with unprecedented levels of realism: doctors and nurses are training on these dummies, and keeping their skills as sharp as their scalpels.[19] Soon, the hope is that “a surgeon preparing to operate, say, on a brain tumor will be able to 3-D print the child’s cancer from a CT scan, and then insert it into the trainer for a run-through.[20]”

Innovation is funny. Around the world, people got excited that 3D printing might be able to allow consumers to print out their own (plastic) cutlery, or that 3D printed parts could get rid of a warehouse or two. Neither will happen soon. So 3D printing is now being seen as a failure, even though real world applications in medical training are actually saving the lives of sick children.

Duncan Stewart, Director of Research at Deloitte Canada.

You can find Duncan on Twitter @dunstewart

References

[1] http://www.thethirdindustrialrevolution.com/

[2] http://www.shareable.net/blog/a-factory-in-every-home

[3] https://en.wikipedia.org/wiki/Replicator_(Star_Trek)

[4] http://store.makerbot.com/replicator

[5] http://www.sculpteo.com/blog/2015/09/21/your-resource-for-3d-printing-build-time-and-post-processing-speeds/

[6] http://www.3dsystems.com/press-releases/3d-systems-end-life-cube-its-entry-level-consumer-3d-printer

[7] http://www2.deloitte.com/content/dam/Deloitte/global/Documents/Technology-Media-Telecommunications/gx-tmt-pred15-3d-printing-revolution.pdf

[8] http://www.supplychain247.com/article/3d_printing_and_the_supply_chains_of_the_future

[9] Confidential 2015 interview with a European manufacturer of industrial heating components.

[10] http://www.forbes.com/sites/stevebanker/2013/10/15/3d-printing-revolutionizes-the-hearing-aid-business/

[11] http://investor.aligntech.com/alignar_final_7-8-14/align-advantage.html

[12] https://www.sols.com/

[13] http://shanghaiist.com/2015/09/02/3d-printed_hip_implant_approved_for.php

[14] http://www.gereports.com/post/116402870270/the-faa-cleared-the-first-3d-printed-part-to-fly/

[15] http://www.pcworld.com/article/2954732/the-us-navy-is-3dprinting-custom-drones-on-its-ships.html

[16] https://www.nasa.gov/mission_pages/station/research/news/3Dratchet_wrench

[17] http://787updates.newairplane.com/787-Suppliers/World-Class-Supplier-Quality

[18] MedTech Bus Tour Recap: Revolutionary Breakthroughs on the Horizon, No Change in Current Industry Fundamentals, published 17 December, 2015, Stifel Equity Research Group (log in required) http://www.stifel.com/research

[19] http://www.nytimes.com/2015/11/10/health/heart-surgery-simulation-medical-training.html?_r=0

[20] Ibid. NY Times