HIV and syphilis biomarkers: Smartphone, finger prick, 15 minute diagnosis

Columbia University School of Engineering and Applied Science
Medical researchers have developed a low-cost smartphone accessory that can perform a point-of-care test that simultaneously detects three infectious disease markers for HIV and syphilis from a finger prick of blood in just 15 minutes. The device replicates, for the first time, all mechanical, optical, and electronic functions of a lab-based blood test without requiring any stored energy: all necessary power is drawn from the smartphone.

Smartphone dongles performed a point-of-care HIV and syphilis test in Rwanda from finger prick whole blood in 15 minutes, operated by health care workers trained on a software app.

A team of researchers, led by Samuel K. Sia, associate professor of biomedical engineering at Columbia Engineering, has developed a low-cost smartphone accessory that can perform a point-of-care test that simultaneously detects three infectious disease markers from a finger prick of blood in just 15 minutes. The device replicates, for the first time, all mechanical, optical, and electronic functions of a lab-based blood test. Specifically, it performs an enzyme-linked immunosorbent assay (ELISA) without requiring any stored energy: all necessary power is drawn from the smartphone. It performs a triplexed immunoassay not currently available in a single test format: HIV antibody, treponemal-specific antibody for syphilis, and non-treponemal antibody for active syphilis infection.

Sia’s innovative accessory or dongle, a small device that easily connects to a smartphone or computer, was recently piloted by health care workers in Rwanda who tested whole blood obtained via a finger prick from 96 patients who were enrolling into prevention-of-mother-to-child-transmission clinics or voluntary counseling and testing centers. The work is published February 4 in Science Translational Medicine. Sia collaborated with researchers from Columbia’s Mailman School of Public Health; the Institute of HIV Disease Prevention and Control, Rwanda Biomedical Center; Department of Pathology and Cell Biology, Columbia University Medical Center; Centers for Disease Control and Prevention — Laboratory Reference and Research Branch, Atlanta; and OPKO Diagnostics.

“Our work shows that a full laboratory-quality immunoassay can be run on a smartphone accessory,” says Sia. “Coupling microfluidics with recent advances in consumer electronics can make certain lab-based diagnostics accessible to almost any population with access to smartphones. This kind of capability can transform how health care services are delivered around the world.”

Sia’s team wanted to build upon their previous work in miniaturizing diagnostics hardware for rapid point-of-care diagnosis of HIV, syphilis, and other sexually transmitted diseases. “We know that early diagnosis and treatment in pregnant mothers can greatly reduce adverse consequences to both mothers and their babies,” Sia notes.

They developed the dongle to be small and light enough to fit into one hand, and to run assays on disposable plastic cassettes with pre-loaded reagents, where disease-specific zones provided an objective read-out, much like an ELISA. Sia estimates the dongle will have a manufacturing cost of $34, much lower than the $18,450 that typical ELISA equipment runs.

The team made two main innovations to the dongle to achieve low power consumption, a must in places that do not always have electricity 24/7. They eliminated the power-consuming electrical pump by using a “one-push vacuum,” where a user mechanically activates a negative-pressure chamber to move a sequence of reagents pre-stored on a cassette. The process is durable, requires little user training, and needs no maintenance or additional manufacturing. The team’s second innovation removed the need for a battery: the dongle uses the audio jack for both power delivery and data transmission. And, because audio jacks are standardized among smartphones, the dongle can be attached to any compatible smart device (including iPhones and Android phones) in a plug-and-play manner.
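The article does not publish the dongle's actual audio-jack encoding scheme, but the general idea of moving data through an audio channel can be sketched as simple frequency-shift keying (FSK), where each bit becomes a short tone. Everything below (the two tone frequencies, the sample rate, and the baud rate) is an illustrative assumption, not the device's real protocol:

```python
import math

# Illustrative only: bits are encoded as audio tones (FSK) and
# recovered by estimating each bit window's frequency from its
# zero-crossing count. Parameters are invented for the demo.
RATE = 44100          # audio sample rate (Hz)
BAUD = 300            # bits per second
F0, F1 = 1200, 2200   # tone frequencies for bit 0 / bit 1

def fsk_encode(bits):
    """Turn a bit list into audio samples, one tone per bit."""
    samples, phase = [], 0.0
    per_bit = RATE // BAUD
    for bit in bits:
        freq = F1 if bit else F0
        for _ in range(per_bit):
            samples.append(math.sin(phase))
            phase += 2 * math.pi * freq / RATE
    return samples

def fsk_decode(samples):
    """Recover bits by counting zero crossings in each bit window."""
    per_bit = RATE // BAUD
    threshold = (F0 + F1) / 2
    bits = []
    for start in range(0, len(samples) - per_bit + 1, per_bit):
        window = samples[start:start + per_bit]
        crossings = sum(1 for a, b in zip(window, window[1:])
                        if (a < 0) != (b < 0))
        est_freq = crossings * RATE / (2 * per_bit)
        bits.append(1 if est_freq > threshold else 0)
    return bits
```

Real audio-jack peripherals (such as early card readers) used variations of this idea, typically with the microphone line carrying data back to the phone while one audio channel carries a power tone.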

During the field testing in Rwanda, health care workers were given 30 minutes of training, which included a user-friendly interface to aid the user through each test, step-by-step pictorial directions, built-in timers to alert the user to next steps, and records of test results for later review. The vast majority of patients (97%) said they would recommend the dongle because of its fast turn-around time, ability to offer results for multiple diseases, and simplicity of procedure.

“Our dongle presents new capabilities for a broad range of users, from health care providers to consumers,” Sia adds. “By increasing detection of syphilis infections, we might be able to reduce deaths by 10-fold. And for large-scale screening where the dongle’s high sensitivity with few false negatives is critical, we might be able to scale up HIV testing at the community level with immediate antiretroviral therapy that could nearly stop HIV transmissions and approach elimination of this devastating disease.”

“We are really excited about the next steps in bringing this product to the market in developing countries,” he continues. “And we are equally excited about exploring how this technology can benefit patients and consumers back home.”

The study was funded by Saving Lives at Birth transition grant (USAID, Gates Foundation, Government of Norway, Grand Challenges Canada, and the World Bank) and Wallace H. Coulter Foundation.


Emergingevents Commentary: Here comes the revolution in medicine making doctors less relevant to the processes of medicine. This will happen very quickly now…..


Books and longreading: a farewell to Prometheus

By Andrey Mir writing for Human-As-Media:

Book reading as fire usage: What is more important, the result or the process?

Long reading was previously considered a way of transferring knowledge. But nowadays linear reading is becoming much shorter. The culture of information consumption is changing, along with the format of knowledge accumulation, transfer and perception. At the emotional level this is seen as cultural degradation.

But do we really need long texts for storing and transferring knowledge? Maybe it is nothing more than an old habit?

In his book The Gutenberg Galaxy: The Making of Typographic Man, Marshall McLuhan analyses a quotation from Geoffrey Chaucer according to which the status of a student in the 14th century was gauged by the number of books he had… written. In fact, the “books” the medieval students “wrote” were notes of what their professors said. The more summaries a student made, the more highly educated he was considered. Real books were very expensive, and so the cheapest way to have a book in the prepress era was to write down what wise men said or copy wise thoughts from other [written] books. Books were the product and the certificate of one’s diligence. University students still take notes during lectures.

When the first books were printed, professors denounced their appearance as the degradation of education. McLuhan mentioned that professors were horrified: Imagine that students no longer need to take down notes because they can buy a book instead! They denounced this as the desecration of knowledge. It was impossible to imagine that one could acquire knowledge without spending long hours writing down every word the professors said, but simply by buying a book. Nowadays it’s like downloading a research paper from Google instead of going to a library for reference material.

In fact, you can get as much knowledge by reading a printed book as by writing down what your professor says, but quicker and in a different way.

This could be a side effect of the progress of civilization: You get the same result in a simpler way. The best example is fire and the way we use it. It is said that the use of fire is what distinguishes men from animals. Mowgli from The Jungle Book may be weaker than some animals, but he is stronger than all of them because he can handle fire. The fire which Prometheus gave to the people contrary to the gods’ will allowed the human race to rise above all others on Earth.

Today we hardly ever use open fire in our life. We don’t use open fire to warm ourselves or to cook food. Ancient men would be outraged to see that we get fire by clicking a lighter. They believed that this sacred procedure should be preceded by two days of ritual dancing and the whispering of incantations to a heap of twigs. Omitting this process was simply not the done thing; now it is, and this disregard for open fire does not cause a culture shock because its gradual withdrawal from our life took a long time. Thousands of years passed from the time when lives depended on fire to the era when people see open fire several times a year and hardly ever use it.

So culture could be said to represent the long hours of a solemn ritual involving twigs and sparks. Culture grows out of the trouble we take to acquire a desired technology. Culture is the gap between a need and its satisfaction. Technology is narrowing this gap, and in the process it affects culture, or at least culture as it was understood in the past, culture that was associated with long rituals. That’s what causes the feeling that we have lost the fundamental essentials.

But what if long reading is like making fire: no longer necessary? What if we can get a comparable result quicker, which matters greatly, considering the utmost importance of time?

New forms of packaging knowledge were created by multimedia and journalism, the first profession to face the digital challenge. Moreover, social media now offer super-fast and super-simple ways of information acquisition and transfer which the long text cannot rival. The long text and thick books are still seen as attributes of the intellect and knowledge. But statistically, the new synthetic format, which can be described as fast’n’fun because it does not have a permanent carrier, is rapidly winning the battle with the long text for people’s time and attention.

In the past, the long text had the monopoly on conveying a meaning primarily for technical reasons. When information is transmitted through physical media such as paper, it is easier to systematize and store in the form of books (scrolls are a different matter; the use of scrolls or books for transferring information predetermined the path of civilization’s development).

The print-era man can easily visualize an average book, a “thin” book (an entertainment book or magazine) and a “thick” book (which is usually an intellectual book the meaning of which is difficult to understand). “Thick” volumes were the most valuable kind of books. This content hierarchy showed that valuable content was inaccessible for ordinary people (“thick” books were expensive and difficult to read). In other words, the value of the long text supported the monopoly of palaces and temples. Monopoly on the long text equaled monopoly on power.

These technical parameters gave rise to the idea that the long text/book is a sacred object. At the everyday level, this implied that thick books contained important information. This can explain the awe for the long texts. Because of the books’ format, education was a long and difficult process, and so the long text also symbolized the diligence of those who manage to read books through to the end. Diligence and the ability to focus one’s attention on any particular subject for a long time are important characteristics of the value hierarchy of the hard-copy era.

But technology has moved forward. Everything has become fast’n’fun, but the sanctity of the long text continues to govern our behavior and opinions.

The focus on the long text of the print era is waning. The multimedia are reviving the audiovisual perception of life that was prevalent in the prepress era, though at a higher level of development. Audiovisual information and even long stories are now easier to store and propagate, something which the prepress people could not do and which we learned to do thanks to the invention of paper and books.

The transition period will not be easy. There will be an inevitable loss of knowledge due to the change of formats and carriers. Not everything that could be put in the long copy can be transformed into the fast’n’fun format. Will anyone even try? The new language of culture is being used to produce a new meaning rather than to convey the past knowledge. Long reads will become like Latin, the dead language of classical knowledge.

People will find it difficult to accept this change, because for the past 5,000 years humankind has used the long text as the main carrier of knowledge. But progress logically moves towards simplification. Evolution is not a mountain you climb, but a vortex that sucks you in. You can and must resist this. I would even say that it is the duty of all people of the long-copy era. But any efforts to do so will be in vain. Books will survive as part of vintage fashion and an element of elite consumption. But they will lose – they are already losing – their role as the main carrier of knowledge.

In a more distant future, the gadgetization of the human body will lead to the creation of a third signal system, the elements of which we can see in the growing interactivity in infographics, visual semantic objects, augmented reality, and the like. And then it will not be the word that will be the semantic carrier (actually, the word is a rather awkward intermediary between the mind and the meaning), but the directly induced emotion or sensory perception of the semantic object. Our future is being created in the experiments with induced perception at 4D and 5D cinemas, not to mention cognitive interfaces.

Is there a place for books in this future? Now Prometheus has given people a device connected to the web.


The future of working from home

Tom Goodwin, director of The Tomorrow Group and marketing writer and speaker, writes:

We rarely notice it, but technology moves way faster than culture. The future of work has long been predicted to be more casual and based away from the office, yet little has changed. We still largely commute daily in order to work from the same spaces and do the same things. How do we change our approach and change the way we work in the future?

As cities swell, public transport groans under the weight of demand, train prices increase, urban house prices surge and commutes lengthen, we look again to how technology can transform the modern workplace.

It’s been an unmet promise since the 1990s, much like the paperless office. We’ve had Skype for years and 4G allows working from the most remote of places, yet the promise of working from home is always on the horizon, never a reality. For each company that promotes it, we’ve seen a Yahoo or a Google (of all places) look to bring workers back to their motherships.

It should obviously not be like this: most of the working environment we construct around ourselves is a legacy of the past.

We work “working hours” which are linked to the agrarian needs of over 10,000 years ago, when working meant growing crops and daylight was needed to plant and harvest them. Lightbulbs, invented over 100 years ago, made this irrelevant in modern working environments, yet we still work obsessively in daylight hours, even changing our clocks twice per year to aid our “need” to work in natural light.

We have a 7-day week thanks to the Babylonians, who 4,000 years ago thought there were 7 heavenly bodies. Yet the 5-day working week dates only from 1908, when a New England mill, to accommodate Jewish and Christian workers’ need for a holy day each, coined the notion of the “weekend”. It’s odd to think how different life would be if the Babylonians had been able to see further.

We work in the same places, at the same time, presumably because the industrial revolution had factories with set roles in production lines and needed everyone present continuously.

Yet none of these things make any sense any more; in a world of smartphones, lightbulbs, virtual workspaces, IM, we’re held hostage by assumptions from 4,000 years ago.

But this isn’t new: we overvalue the impact of technology and underestimate the impact of culture and how slow it is to change. We forget, in an age of fingerprint scanning, drones and 3D printing, that we still shake hands to show we’re not carrying swords, we still clink glasses to show we’re not poisoning each other, our body language reflects the behavior of cavemen, we “carbon copy” people on email; heck, some people still use Yahoo, the most primitive of all human behaviors.

So how do we unbundle ourselves from this oddity? The benefits of working different hours, or from home more, are massive and well documented. They would:
1) Reduce the cost of commuting, in terms of travel costs, energy costs and environmental damage.
2) Reduce the wasted time and damage from stress of commuting.
3) Reduce the unhappiness of such a crappy start to the day.

But most of all, working set hours from the same place, after a lengthy battle to get to work, with photocopiers humming, hampers creativity. For me the most unlikely place to get an idea is an office. Put me in a museum or a weird train, let me look out of the window on a plane or overhear a conversation in McDonald’s, and I’ll be a million times more likely to spark something.

We of course still need face-to-face contact; ideas are nurtured in groups, and thinking needs to get small and big and be pulsed through to bigger, better things with other people, but often that’s the easy part.

So with this in mind, I’ve three notions that can help unleash us from the office.

1) A change in culture – the rise of management by objectives.

For some reason we bundled “accomplishing something” with what merely correlates to it: “being at work”.

If we are truly honest with ourselves, we tend to measure how hard we have worked by how many hours we’ve been at work. Sales assistants are generally paid by the hours worked, not what they sell; personal assistants don’t get paid per email or booking, but by how long they are able to stand being at work. Advertising agencies, management consultants and lawyers are paid by the amount of time spent on something. It’s all totally bizarre, but it stays in place because we’ve never been comfortable with a better way.

Because we correlate “being at work” with doing stuff, we’ve assumed that working from home was inefficient and that people abused the system. When you expect people to behave this way, they do. I’ve had friends who used “WFH” as a term for a day off: it was a chance to have a big night the evening before and merely wake at 8am to announce they were already online.

The way around this is to measure productivity and set goals for accomplishments. If we were paid simply to get results, we could all know that our interests were aligned, and we could make decisions about where we need to be in order to best get things done, which oddly enough may require being in the office, but not around arbitrary hours inherited from the Neolithic revolution.

Now, I am no management theorist, but wouldn’t it be a fun exercise to think about how you would manage people in this world? What would the objectives be that people needed to accomplish? How would you monitor it? Where would we need to be day to day? What are people actually doing for your business? When would people need to be in the office? How would these hours overlap? Could you offer bonuses? What do promotions look like in this world?

Many interesting questions we should all be asking.

2) Permanent Freelancing.

The idea of a job or company for life hasn’t just faded, it’s been smashed. We may assume future generations will be job hoppers, but they will likely go beyond this to work many different careers in their lifetime, and often at the same time. Anyone who’s spent time hanging out in painfully trendy parts of town, and who dares talk to (or WhatsApp with) someone with a deliberately asymmetrical haircut, has found that people are no longer employees but jewelry maker / music producer / fashion blogger / feng shui consultant / website designer at the same time. We’re never too sure what actually pays the bills and where success lies, but it seems rather mean-spirited to question it.

So how do we work with this generation, how does diverse talent come together, and how do we arrange a workplace around this?

It’s not been hard for me. In my company (Tomorrow), I need the very best people on the planet to work incredibly hard for tiny amounts of time, on super interesting projects. It’s not only way more fun this way, but a future generation of experts is drawn to short, interesting, pioneering projects they learn from. The cost of employment is slight and I get the best talent this way. We often forget in agencies that the very best people can work anywhere. If your website design agency is in a business park in Reading, you probably won’t be getting the UI star or designer that will set the world alight, not unless you pay them incredibly, offer them the chance to work from a Montenegrin town one week and an Indonesian beach the next, and offer them something they can learn from and be proud of.

So be honest with yourself: do you want the best people in the world to fit into your system, or do you want to attract the best people and give them something they value (which is likely to be freedom, not money)?

3) New Environments.

Has anyone ever actually accomplished anything on a conference call? Ever? Dan is always dialing in late, forcing yet another reiteration of what the call is about; Jenny is going through airport security and can’t talk; and we can’t hear the call’s host because their hands-free is crap and they are driving too fast. In 2014, the main point of a conference call is to show that you tried.

Video calls still seem awkward: how close should you be to the camera? Are they still, or has the video crashed? Isn’t video a bit much? Why are they looking there?

Instant messenger is better, but why are we spending effort asking how someone’s day is? Do we have to talk about the weather? Can they tell I’m being sarcastic? This isn’t the way to ask a favor.

We’ve just not learned how to use this technology, and we’ve pretended it’s not crap. So we need to use technology better, but we also need a better environment to use it in. I see a new type of home office.

I see this home office as a smallish space, 10ft x 10ft for example, with projected livestreams on each wall. We may have direct video links always open to 10-20 people, and other workable information on other screens.

Each person will be super low-res and blurry in real time, but have a light showing their status. In order to open a proper “gateway” to that person, in full HD and with sound, you’d need to touch the screen and they’d need to accept the call, upon which a bridge is formed and a normal face-to-face conversation is held.
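The touch-then-accept handshake described above is essentially a small state machine. Here is a toy sketch of it; all class and method names are invented for illustration, and the point is simply that the HD bridge only forms when the caller touches and the callee accepts:

```python
from enum import Enum

class Status(Enum):
    AMBIENT = "low-res, presence only"   # the default blurry livestream
    RINGING = "gateway requested"        # caller has touched the panel
    BRIDGED = "full HD with sound"       # both sides agreed

class Gateway:
    """Toy model of one person-to-person link on the home-office wall."""

    def __init__(self):
        self.state = Status.AMBIENT

    def touch(self):
        """Caller touches the other person's panel to request a gateway."""
        if self.state is Status.AMBIENT:
            self.state = Status.RINGING

    def accept(self):
        """Callee accepts; only then does the HD bridge form."""
        if self.state is Status.RINGING:
            self.state = Status.BRIDGED

    def hang_up(self):
        """Either side drops the link back to the ambient stream."""
        self.state = Status.AMBIENT
```

The design choice worth noticing is that neither side can force the full-resolution link alone, which is what keeps an always-on video wall from feeling like surveillance.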

Could this be the best of both? What is this room called? How do villages and towns of the future adapt to this new working pattern?

The New Landscape.

On recent train trips across the UK, it blows my mind how beautiful our countryside is and how dreadful our domestic architecture is. In the USA, large-scale architecture is typically crap, but homes (for the middle class and above) are generally well designed, optimistic, futuristic and airy; in the UK it’s the opposite, with wonderful public buildings, adventurous railway stations, remarkable offices and the very worst new homes our planet has ever seen.

We hear desperate calls to develop the Greenbelt, yet the question is always yes or no, never what and how. Why do our homes look like images from a 5-year-old’s drawing pad? Why do we hate windows? Do we need to replicate the architectural language of the past without any of the reasons it existed? How would a Roman temple treat an underground car port?

I’d love to see our assumptions challenged: the Greenbelt full of “digital commuter villages”, with central community work and health centers, subterranean leisure centers, vast numbers of communal cars to subscribe to, electric self-driving buses to local rail connections, decentralized power per home and, above all else, homes with this new “home office”.

Maybe by removing every assumption we’ve ever made, we can reimagine both working from home and our entire built environment.


6 Trends for 2017 and Beyond.

Tom Goodwin, director of the Tomorrow Group and marketing writer and speaker:

When thinking about trends for 2015 for a new post (now published), I found some longer-term shifts that I can feel developing, and I wanted to take this chance to raise 6 themes worth consideration.

This post isn’t so much a proud proclamation of the future as a call for debate, perhaps some interesting dinner party conversation starters. What do you think?

I think the biggest changes for the next 4 years will be the following:

1) A Thinner Internet

The internet will become more seamless, more pervasive, personal and even predictive. It will spread across more devices but in thinner, more context specific layers.

From the notification layer on our phone, to “card” like app design, to apps that run invisibly in the background to wake only when required.

From fridges that show the weather at a glance, to clocks that show with colours when we’re late, to watches that tell us with a vibration whether to head right or left. Amazon Echo as an ambient helper.

Our phones unlock in trusted places, our cars pick the coffee shop we may want; Siri, Cortana and Google Now all become personal assistants that guide us. Anticipatory or predictive computing will be a huge development that we all talk about for the next few years as we begin to outsource our cognitive functions (and trade privacy). Far-fetched? How many phone numbers do you now know? What about birthdays?

We used to search the web; we used to go deep in and navigate. In the near future the web bubbles up to a surface that we glance at, in more places and in less deep ways. It becomes key contextual information.

How can your business move into this thin layer? How does it become a contextual nudge or key information at the right time?

2) The post privacy age.

A generation of people simply won’t understand the concept of privacy. A generation who’ve grown up sharing geotagged images of their most personal moments, who’ve had every Gmail message read, who’ve lived with loyalty cards and financial dashboards, won’t get for one second what was once possible: privacy.

Instead a generation of people will have grown up having traded it. Their Target app gave them bigger discounts, they used Facebook for free, they got retargeted ads from newspapers we once paid for.

From better healthcare through the analysis of anonymous health data, to more efficient smart cities built on shared user data, to thermostats that save energy by knowing where you are, to Cortana or Google Lollipop becoming your personal assistant.

We will soon grow up in an age of near-perfect information, and when we realize that when more people know more things there are some clear benefits, the topic won’t be how we keep privacy but what we trade it for, where to draw the new line, and how we learn to trust those who hold it.

What does this mean for marketers? How can they destroy assumptions about privacy? Why can’t we offer more personal ads? What about more personal offers? Let’s think about how to reward people who choose to share data; it could be the new micro-currency of the web.

3) The decline of the middle class in the developed world.

From Denver to Dover, Berlin to Bucharest, whether it’s the fault of the global economic downturn, quantitative easing, the internet or labor automation, there seems to be a clear trend of rising income inequality, and in particular the transfer of wealth upwards, and it’s hard to see anything reversing this.

Will we somehow see more working and middle class jobs appearing? With the rise of automation, the global movement of talent and the rise of technology making industry more efficient, it’s impossible to see this happening.

Will property ownership revert to the masses? It’s hard to see how, unless those in power stop serving their own interests.

The future high street or mall, virtual or real, will be dominated by the extremes. From Burberry and Louis Vuitton at the top to the masses of bargain retailers, dollar stores, pound shops, payday lenders and pawnshops at the bottom, it’s hard to see how anyone in the middle can survive.

The share prices of Tesco, Sainsbury’s, Sears and JC Penney testify to this. Be careful where your consumer is going. The middle is a terrible place to be.

4) Mature Money.

Advertising and marketing have always obsessed over the young, but never more so, and never more pointlessly.

Not only do the young have less influence than the media would have us believe, but they also suffer from having relatively little money and no loyalty whatsoever.

Yet the everlasting debate is about how to target and segment millennials or digital natives, and never how to target the old.

The over-50s now hold over 80% of most developed nations’ wealth; they have more free time, look set to live far longer, are way healthier and are more engaged with brands than ever before. Yet the world of marketing abandons them to chase the trendy money.

Youth finances have never looked worse: youth unemployment is high, the cost of living is crippling, university fees in many countries are staggering and their future looks massively uncertain. Meanwhile the baby boomers sit on assets rocketing in value, draw healthy pensions well into the future, and look for ways to spend it all.

The trend lines are clear, so what can your business do about it?

5) Non Ownership

A lot of history is cyclical; people react and rebel against the past. For a generation that grew up in an age of post-war rations, economic hardship and expensive electrical items, we’ve seen the reaction in the ultimate consumer boom. We can now buy massive TVs for less than $400 that we need not replace for years. In real terms cars and clothing are incredibly cheap, we’ve chickens for 2 dollars; the only thing that is expensive and limited is time.

A generation of people who’ve grown up with this abundance may turn against it. The most expensive and best phone in the world is $1,000; the most appropriate laptop costs the same. Armed with these devices we need not buy the 100 items they now replace. From the sharing economy making renting trendy, to a group of people unable to buy houses who don’t see the stigma in renting, to hardware that becomes new again through software updates, to the digitization and streaming of once-physical items. It could be we’re on the verge of a new type of consumerism, where armed with a past of excess, a present of limited finances and a future of resource scarcity, we choose to own fewer, better, more adaptable items.

6) Euthanasia

Sadly, humans are not built to last; we are the ultimate in built-in obsolescence, and as we age, we do so asymmetrically.

When expensive modern medicine is able to keep people alive longer, with ever diminishing returns, at what point do we accept that an aging, unproductive population isn’t sustainable?

What becomes the new retirement age? When do we agree to treatments? What constitutes action in the interest of the person? What does this mean for countries with government-provided healthcare?

It’s a bit grim to dwell on it and the marketing implications are less clear, so just a philosophical issue to chat about and think about for the holiday dinner party season.

Hope you liked them. This is a call for debate, not a proclamation of the fixed, so what do you think? What other issues do you foresee?


Barclays Has The Best Explanation Yet Of How Solar Will Destroy America’s Electric Utilities

Silver lake coal plant

It's been a good few decades for America's electric utilities: as regulated monopolies, they face almost no competition and enjoy access to cheap credit.

In a new note, a Barclays team led by Y.C. Koh says the industry is finally facing its day of reckoning, from a source many have long dismissed as an unviable pipedream: solar. Specifically, the threat is residential solar, people creating their own electricity.

To prove that the threat is real this time, Barclays is downgrading its electric sector rating to Underweight from Market Weight: "…The regulatory responses to the growing competitive threat from solar + storage may prove inadequate to address potential strains to the credit profiles of issuers in these states," they write.

There are two main reasons why solar is finally for real, the group says. The first is that for more than a decade there has been a huge push from governments around the world, at every level, to subsidise renewables. Bloomberg New Energy Finance (BNEF) estimates that the annual output of PV modules increased almost 30x in the past decade, from 1,000MW per year in 2005 to more than 30,000MW in 2013, Barclays notes. With that scale has come cheaper prices for panels.

Here’s what the cost curve looks like:

Solar cost curve

The second reason is the advent of cheap storage. For the past few years, homeowners have addressed renewables' intermittency problem — the wind isn't always blowing, the sun doesn't always shine — by making a deal with their utility: they continue to buy the utility's power when their panels aren't producing, and sell any excess power the panels generate back onto the grid. This is called net metering.

Net metering has been a boon for incentivising rooftop solar adoption. But what if you could truly power up your home through a solar-charged battery, and only have to buy utility electricity in an emergency?

As recently as 2009, the all-in costs for such batteries would have been as much as $US17,000. But with the expansion of electric vehicles, Barclays says the cost of storage has been falling rapidly and now stands at about $US3,700. And it just so happens that a battery big enough to power an electric vehicle can power the average home for up to three days, Barclays notes, "potentially opening a new use in residential distributed generation systems." Battery costs could come down even further if Elon Musk's gigafactory launches, they add. Yesterday we discussed this idea in detail. Here's the price decline chart:

Barclays battery costs
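The "three days" claim above is easy to sanity-check with back-of-envelope arithmetic. The capacity and consumption figures below are illustrative assumptions on my part, not numbers from the Barclays note:

```python
# Rough check of the claim that an EV-sized battery can run an average
# home for up to three days. Both figures are illustrative assumptions.

EV_BATTERY_KWH = 60.0        # assumed usable capacity of a typical EV pack
HOME_USE_KWH_PER_DAY = 20.0  # rough average daily use for a US household

days_of_backup = EV_BATTERY_KWH / HOME_USE_KWH_PER_DAY
print(f"Days of backup power: {days_of_backup:.1f}")  # → 3.0
```

With a bigger pack or a thriftier home, the backup window stretches further; the point is only that EV-scale batteries and household-scale storage are the same order of magnitude.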

Barclays sees the potential for the solar + storage wave to spread beyond its roots in California and Arizona. Here is their timeline for when solar costs could reach parity in all 50 states:

Barclays solar

Cheap solar panels, combined with cheap storage, will spark a grid "defection spiral" that will pry away utilities' grip on the power monopoly. In this scenario, early adopters begin leaving the grid, incrementally pushing up utilities' power costs — which further exacerbates the shift to solar and storage, and so on. We are already seeing evidence of step one, as utilities have begun complaining that solar customers are causing electricity prices for non-solar users to go up.
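The feedback loop behind the defection spiral can be sketched as a toy simulation: fixed grid costs are spread over fewer ratepayers each year, rates rise, and higher rates push more customers toward solar + storage. All numbers here are illustrative assumptions, not Barclays figures:

```python
# Toy model of the "defection spiral": each year's defections raise the
# per-customer rate, which in turn accelerates next year's defections.

fixed_costs = 1_000_000.0  # annual grid costs the utility must recover ($)
customers = 10_000         # ratepayers currently on the grid
solar_cost = 90.0          # assumed annual cost of solar + storage ($)
sensitivity = 0.002        # fraction defecting per $ of grid premium

for year in range(1, 6):
    rate = fixed_costs / customers           # each customer's share of costs
    premium = max(rate - solar_cost, 0.0)    # how much pricier the grid is
    defectors = int(customers * min(sensitivity * premium, 0.5))
    customers -= defectors
    print(f"Year {year}: rate=${rate:.0f}, defectors={defectors}, remaining={customers}")
```

Running this, defections grow year over year even though the parameters never change: the rising rate alone is enough to accelerate the exodus, which is exactly the dynamic the note describes.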

This is maybe the most vivid description in the note of what solar will do to utilities:

we envision an electricity market where demand for grid power falls, peak hours shift (perhaps dramatically), and regulatory mechanisms need to be adjusted or overhauled to accommodate some utilities becoming the electricity generators of last resort. We expect the net effect to be higher grid power costs (thereby exacerbating the consumer shift to solar + storage), lower average credit quality for regulated utilities and unregulated power producers, and increased recognition of the long-term threat to grid power.

Whatever roadblocks utilities try to toss up — and there’s already been plenty of tossing in the states most vulnerable to solar, further evidence of the pressures they’re facing — it’s already too late, Barclays says:

We fully expect utilities and regulators to make a good faith effort to preserve the status quo “regulatory compact,” whereby the monopoly utility provides a safe and reliable service and regulators allow it to earn a reasonable low-risk return. However, we also expect them to be playing a constant game of catch-up as solar develops. The costs of solar and storage technologies are falling quickly and may fall even faster as higher demand builds additional scale. But the cost of distribution grids and thermally generated power are more likely to rise than to fall, in our view. As a result, regulators and utilities will be constantly trying to respond to a moving target, which is precisely the environment where slow-moving incumbents can fall behind.

It’s been a good run.




Is This The Breakthrough That Brings Solar Power To The World?

The days of solar panels that were bulky, expensive, heavy, and required harsh chemicals to produce and a lot of labor to install may be coming to an end. Scientists in Australia have produced the largest printed solar cells ever made, using a newly developed solar cell printer. Yes, they are printing solar cells.

The cells are flexible, cheap, and made from organic plastics and materials.

According to scientist Dr Scott Watkins, printing cells on such a large scale opens up a huge range of possibilities for pilot applications:

“There are so many things we can do with cells this size[…]We can set them into advertising signage, powering lights and other interactive elements. We can even embed them into laptop cases to provide backup power for the machine inside.”

Dr. David Jones has a few more uses in mind:

“Eventually we see these being laminated to windows that line skyscrapers. By printing directly to materials like steel, we’ll also be able to embed cells onto roofing materials.”

Image: VICOSC’s new solar cell printer installed at CSIRO.

These organic solar cells differ from conventional solar panels in that they are lighter, flexible, semi-transparent, and projected to be much cheaper.

The technology and manufacturing are still in the research phase, but the scientists behind the project are hopeful about its future use for consumers. According to one of the project's partners, CSIRO:

“The consortium is currently only purchasing materials on a research scale. When bought on a larger scale it is anticipated that component costs will be significantly lower and that pricing around A$1/W will be achievable.”

$1 per watt? Coupled with the lowered installation costs of this lighter system, that is utterly revolutionary. With these costs, a family would be able to cleanly power their home for only a few thousand dollars.
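The "few thousand dollars" figure follows directly from the A$1/W target. The system size below is an illustrative assumption (a common rooftop size), not a number from the article, and it covers modules only, not installation:

```python
# What A$1/W module pricing implies for a household-scale system.
# System size is an illustrative assumption; cost excludes installation.

system_watts = 5_000    # a common rooftop system size (5 kW)
price_per_watt = 1.00   # the A$1/W target cited above

panel_cost = system_watts * price_per_watt
print(f"Panel cost at A$1/W: A${panel_cost:,.0f}")  # → A$5,000
```

Even doubling the system size keeps the module bill in the low thousands, which is where the "utterly revolutionary" framing comes from.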

There is also an added benefit to printing solar cells over traditional manufacturing of solar panels: factories can be much smaller, and therefore much cheaper. This will allow production of these cells to be more decentralized, making them more accessible in the developing world and the first world alike.

Is this the breakthrough that finally kills ‘Big Oil’ and brings solar power to the world? We think they are getting close.

Here’s a short video that explains these cells:

Printing Australia’s largest solar cells


How the iWatch will change advertising forever

By Tom Goodwin,

Tom Goodwin, a director of the Tomorrow Group, explains how he thinks the iWatch will change advertising forever.

Tom Goodwin: a director of the Tomorrow Group


It will take a tiny screen to finally wake the industry up to what the digital medium really means – a bright future of the new, and not the mindless misappropriation of advertising platforms of the past.

It’s amazing how many opportunities we’ve missed mindlessly layering old techniques onto new media. We’ve got excited about better ad targeting, improved measurement, or the opportunity to do real-time marketing, but the vast majority of digital advertising innovation today still comes from the placement of ads and not the message. What an incredible waste.

In fact, the entire ecosystem of contemporary advertising is an absurdly unimaginative recycling of what we knew how to make, buy, measure and sell before. We've been stuck in a cycle of technological determinism, framing future possibilities based on previous technologies, and adapting the models and structures of the past to fit the form of the present.

When the worldwide web presented us with space to convey brand messages, we did what we knew best: we mindlessly copied newspaper ads, digitised them and, in a moment of genius, made them clickable.

When websites allowed page takeovers, we filled those pages with what we knew best (direct marketing) but we made it clickable.

The first brand websites became online brochures, but digitised and made clickable.

When bandwidth made branded videos possible, we just put on TV commercials.

Despite having access to over $10 billion of R&D budgets from new media owners like Facebook, Yahoo, and Twitter, we still see print ads in digital editions of magazines, we see flyers used as ads on Instagram, TV ads in Facebook feeds, and mobile ads that are essentially just tiny, electronic versions of newspaper ads from the 1750s.

The only thing we’ve invented is new terminology. We now have advertorials online, but we call them native ads, we have advertiser-funded TV, but we call it content marketing. We have had tactical print ads, but we now call it real-time marketing. Every single tool in advertisers’ arsenal today has been available since the 1960s.


The digital world is a world of incredible abundance: it's got the biggest creative canvas, the most incredible technology, huge optimism and boundless funding. It's the best thing that could ever happen to advertising, and yet we all started lazily with what we knew best. Things will soon change. For the first time ever we will start to do what we do best: to think originally and to solve problems with creativity.

We’ve now reached a stage in which new technologies are forcing a paradigm shift, and it starts with the iWatch. When the digital medium becomes an extension of our senses, our minds, and our bodies, as [Canadian philosopher Marshall] McLuhan predicted, we have to consider not what we have, but what is possible.

When we’re faced with a new platform, with a set of unique characteristics, we have to rethink our approach to advertising: from pushing messages out to consumers based on past models of communication, to finding ways to add value and service, based on behaviour, needs and habits, and enhance their lives.

The future of modern marketing lies in starting afresh and seeing past pre-determined ideas about media platforms or advertising. For brands, the future is not in catchphrases like “real-time” or “data”. It’s about asking questions and exploring new territories:

  • How do you work with route guidance apps or GPS?
  • How do you link with data from social networks, based on proximity?
  • How should you use audio signals, NFC capabilities, or health inputs?
  • How are you leveraging real-time data like current events, the news or the stock market?
  • How are you incorporating mobile coupons, or integrating with our calendars?
  • What becomes of the rise of the invisible app? How can your brand run as an assistive, typically invisible layer?

Maybe it’s about providing the information that you are located near a train station when the traffic is bad.

Maybe it’s the flash sale in the store you are passing, the beer voucher as your friend is close by, the suggested happy soundtrack to your can of Coke, the Gatorade after your gym session.

Smartwatches may or may not be here soon, and they may or may not work, but regardless, as a concept they stand for what advertising becomes when the true power of technology, new media opportunities and creative thinking come together.

The future of digital advertising won’t look like advertising.

This article was first published on

The open source revolution is coming and it will conquer the 1% – ex CIA spy

The man who trained more than 66 countries in open source methods calls for re-invention of intelligence to re-engineer Earth
A businessman tries to break through a line of Occupy Wall Street protesters who had blocked access to the New York Stock Exchange area in November 2011. Photograph: Don Emmert/AFP/Getty Images

Robert David Steele, former Marine, CIA case officer, and co-founder of the US Marine Corps Intelligence Activity, is a man on a mission. But it's a mission that frightens the US intelligence establishment to its core.
With 18 years' experience working across the US intelligence community, followed by 20 more in commercial intelligence and training, Steele's exemplary career has spanned almost all areas of the clandestine world.

Steele started off as a Marine Corps infantry and intelligence officer. After four years on active duty, he joined the CIA for about a decade before co-founding the Marine Corps Intelligence Activity, where he was deputy director. Widely recognised as the leader of the Open Source Intelligence (OSINT) paradigm, Steele went on to write the handbooks on OSINT for NATO, the US Defense Intelligence Agency and the U.S. Special Operations Forces. In passing, he personally trained 7,500 officers from over 66 countries.

In 1992, despite opposition from the CIA, he obtained Marine Corps permission to organise a landmark international conference on open source intelligence – the paradigm of deriving information to support policy decisions not through secret activities, but from open public sources available to all. The conference was such a success it brought in over 620 attendees from the intelligence world.

But the CIA wasn’t happy, and ensured that Steele was prohibited from running a second conference. The clash prompted him to resign from his position as second-ranking civilian in Marine Corps intelligence, and pursue the open source paradigm elsewhere. He went on to found and head up the Open Source Solutions Network Inc. and later the non-profit Earth Intelligence Network which runs the Public Intelligence Blog.

Robert David Steele
Former CIA spy and Open Source Intelligence pioneer, Robert David Steele speaking at the Inter-American Defense Board in 2013

I first came across Steele when I discovered his Amazon review of my third book, The War on Truth: 9/11, Disinformation and the Anatomy of Terrorism. A voracious reader, Steele is the number 1 Amazon reviewer for non-fiction across 98 categories. He also reviewed my latest book, A User’s Guide to the Crisis of Civilization, but told me I’d overlooked an important early work – ‘A More Secure World: Our Shared Responsibility, Report of the UN High-Level Panel on Threats, Challenges, and Change.’

Last month, Steele presented a startling paper at the Libtech conference in New York, sponsored by the Internet Society and Reclaim. Drawing on principles set out in his latest book, The Open-Source Everything Manifesto: Transparency, Truth and Trust, he told the audience that all the major preconditions for revolution – set out in his 1976 graduate thesis – were now present in the United States and Britain.

Steele’s book is a must-read, a powerful yet still pragmatic roadmap to a new civilisational paradigm that simultaneously offers a trenchant, unrelenting critique of the prevailing global order. His interdisciplinary ‘whole systems’ approach dramatically connects up the increasing corruption, inefficiency and unaccountability of the intelligence system and its political and financial masters with escalating inequalities and environmental crises. But he also offers a comprehensive vision of hope that activist networks like Reclaim are implementing today.

“We are at the end of a five-thousand-year-plus historical process during which human society grew in scale while it abandoned the early indigenous wisdom councils and communal decision-making,” he writes in The Open Source Everything Manifesto. “Power was centralised in the hands of increasingly specialised ‘elites’ and ‘experts’ who not only failed to achieve all they promised but used secrecy and the control of information to deceive the public into allowing them to retain power over community resources that they ultimately looted.”

Today’s capitalism, he argues, is inherently predatory and destructive:

“Over the course of the last centuries, the commons was fenced, and everything from agriculture to water was commoditised without regard to the true cost in non-renewable resources. Human beings, who had spent centuries evolving away from slavery, were re-commoditised by the Industrial Era.”

Open source everything, in this context, offers us the chance to build on what we’ve learned through industrialisation, to learn from our mistakes, and catalyse the re-opening of the commons, in the process breaking the grip of defunct power structures and enabling the possibility of prosperity for all.

“Sharing, not secrecy, is the means by which we realise such a lofty destiny as well as create infinite wealth. The wealth of networks, the wealth of knowledge, revolutionary wealth – all can create a nonzero win-win Earth that works for one hundred percent of humanity. This is the ‘utopia’ that Buckminster Fuller foresaw, now within our reach.”

The goal, he concludes, is to reject:

“… concentrated illicitly aggregated and largely phantom wealth in favor of community wealth defined by community knowledge, community sharing of information, and community definition of truth derived in transparency and authenticity, the latter being the ultimate arbiter of shared wealth.”

Despite this unabashedly radical vision, Steele is hugely respected by senior military intelligence experts across the world. As a researcher at the US Army War College’s Strategic Studies Institute, he has authored several monographs advocating the need for open source methods to transform the craft of intelligence. He has lectured to the US State Department and Department of Homeland Security as well as National Security Councils in various countries, and his new book has received accolades from senior intelligence officials across multiple countries including France and Turkey.

Yet he remains an outspoken critic of US intelligence practices and what he sees as their integral role in aggravating rather than ameliorating the world’s greatest threats and challenges.

This week, I had the good fortune of being able to touch base with Steele to dig deeper into his recent analysis of the future of US politics in the context of our accelerating environmental challenges. The first thing I asked him was where he sees things going over the next decade, given his holistic take.

“Properly educated people always appreciate holistic approaches to any challenge. This means that they understand both cause and effect, and intertwined complexities,” he said. “A major part of our problem in the public policy arena is the decline in intelligence with integrity among key politicians and staff at the same time that think tanks and universities and non-governmental organisations have also suffered a similar intellectual diminishment.

“My early graduate education was in the 1970’s when Limits to Growth and World Federalism were the rage. Both sought to achieve an over-view of systemic challenges, but both also suffered from the myth of top-down hubris. What was clear in the 1970s, that has been obscured by political and financial treason in the past half-century, is that everything is connected – what we do in the way of paving over wetlands, or in poisoning ground water ‘inadvertently’ because of our reliance on pesticides and fertilisers that are not subject to the integrity of the ‘Precautionary Principle,’ ultimately leads to climate catastrophes that are acts of man, not acts of god.”

He points me to his tremendous collection of reviews of books on climate change, disease, environmental degradation, peak oil, and water scarcity. “I see five major overlapping threats on the immediate horizon,” he continues. “They are all related: the collapse of complex societies, the acceleration of the Earth’s demise with changes that used to take 10,000 years now taking three or less, predatory or shock capitalism and financial crime out of the City of London and Wall Street, and political corruption at scale, to include the west supporting 42 of 44 dictators. We are close to multiple mass catastrophes.”

What about the claim that the US is on the brink of revolution? “Revolution is overthrow – the complete reversal of the status quo ante. We are at the end of centuries of what Lionel Tiger calls ‘The Manufacture of Evil,’ in which merchant banks led by the City of London have conspired with captive governments to concentrate wealth and commoditise everything including humans. What revolution means in practical terms is that balance has been lost and the status quo ante is unsustainable. There are two ‘stops’ on greed to the nth degree: the first is the carrying capacity of Earth, and the second is human sensibility. We are now at a point where both stops are activating.”

Robert Steele - preconditions for revolution
Former CIA officer’s matrix on the preconditions for revolution

It’s not just the US, he adds. “The preconditions of revolution exist in the UK, and most western countries. The number of active pre-conditions is quite stunning, from elite isolation to concentrated wealth to inadequate socialisation and education, to concentrated land holdings to loss of authority to repression of new technologies especially in relation to energy, to the atrophy of the public sector and spread of corruption, to media dishonesty, to mass unemployment of young men and on and on and on.”

So why isn’t it happening yet?
“Preconditions are not the same as precipitants. We are waiting for our Tunisian fruit seller. The public will endure great repression, especially when most media outlets and schools are actively aiding the repressive meme of ‘you are helpless, this is the order of things.’ When we have a scandal so powerful that it cannot be ignored by the average Briton or American, we will have a revolution that overturns the corrupt political systems in both countries, and perhaps puts many banks out of business. Vaclav Havel calls this ‘The Power of the Powerless.’ One spark, one massive fire.”

But we need more than revolution, in the sense of overthrow, to effect change, surely. How does your manifesto for ‘open source everything’ fit into this? “The west has pursued an industrialisation path that allows for the privatisation of wealth from the commons, along with the criminalisation of commons rights of the public, as well as the externalisation of all true costs. Never mind that fracking produces earthquakes and poisons aquifers – corrupt politicians at local, state or province, and national levels are all too happy to take money for looking the other way. Our entire commercial, diplomatic, and informational systems are now cancerous. When trade treaties have secret sections – or are entirely secret – one can be certain the public is being screwed and the secrecy is an attempt to avoid accountability. Secrecy enables corruption. So also does an inattentive public enable corruption.”

Is this a crisis of capitalism, then? Does capitalism need to end for us to resolve these problems? And if so, how? “Predatory capitalism is based on the privatisation of profit and the externalisation of cost. It is an extension of the fencing of the commons, of enclosures, along with the criminalisation of prior common customs and rights. What we need is a system that fully accounts for all costs. Whether we call that capitalism or not is irrelevant to me. But doing so would fundamentally transform the dynamic of present day capitalism, by making capital open source. For example, and as calculated by my colleague JZ Liszkiewicz, a white cotton T-shirt contains roughly 570 gallons of water, 11 to 29 gallons of fuel, and a number of toxins and emissions including pesticides, diesel exhaust, and heavy metals and other volatile compounds – it also generally includes child labor. Accounting for those costs and their real social, human and environmental impacts has totally different implications for how we should organise production and consumption than current predatory capitalism.”

So what exactly do you mean by open source everything? “We have over 5 billion human brains that are the one infinite resource available to us going forward. Crowd-sourcing and cognitive surplus are two terms of art for the changing power dynamic between those at the top that are ignorant and corrupt, and those across the bottom that are attentive and ethical. The open source ecology is made up of a wide range of opens – open farm technology, open source software, open hardware, open networks, open money, open small business technology, open patents – to name just a few. The key point is that they must all develop together, otherwise the existing system will isolate them into ineffectiveness. Open data is largely worthless unless you have open hardware and open software. Open government demands open cloud and open spectrum, or money will dominate feeds and speeds.”

Robert Steele
Robert Steele’s vision for open source systems

On 1st May, Steele sent an open letter to US vice president Joe Biden requesting him to consider establishing an Open Source Agency that would transform the operation of the intelligence community, dramatically reduce costs, increasing oversight and accountability, while increasing access to the best possible information to support holistic policy-making. To date, he has received no response.

I'm not particularly surprised. Open source everything pretty much undermines everything the national security state stands for. Why bother even asking vice president Biden to consider it? "The national security state is rooted in secrecy as a means of avoiding accountability. My first book, On Intelligence: Spies and Secrecy in an Open World – which by the way had a foreword from Senator David Boren, the immediate past chairman of the Senate Select Committee for Intelligence – made it quite clear that the national security state is an expensive, ineffective monstrosity that is simply not fit for purpose. In that sense, the national security state is its own worst enemy – it's bound to fail."

Given his standing as an intelligence expert, Steele’s criticisms of US intelligence excesses are beyond scathing – they are damning. “Most of what is produced through secret methods is not actually intelligence at all. It is simply secret information that is, most of the time, rather generic and therefore not actually very useful for making critical decisions at a government level. The National Security Agency (NSA) has not prevented any terrorist incidents. CIA cannot even get the population of Syria correct and provides no intelligence – decision-support – to most cabinet secretaries, assistant secretaries, and department heads. Indeed General Tony Zinni, when he was commander in chief of the US Central Command as it was at war, is on record as saying that he received, ‘at best,’ a meagre 4% of what he needed to know from secret sources and methods.”

So does open source mean you are calling for abolition of intelligence agencies as we know them, I ask. “I’m a former spy and I believe we still need spies and secrecy, but we need to redirect the vast majority of the funds now spent on secrecy toward savings and narrowly focused endeavors at home. For instance, utterly ruthless counterintelligence against corruption, or horrendous evils like paedophilia.

“Believe it or not, 95% of what we need for ethical evidence-based decision support cannot be obtained through the secret methods of standard intelligence practices. But it can be obtained quite openly and cheaply from academics, civil society, commerce, governments, law enforcement organisations, the media, all militaries, and non-governmental organisations. An Open Source Agency, as I’ve proposed it, would not just meet 95% of our intelligence requirements, it would do the same at all levels of government and carry over by enriching education, commerce, and research – it would create what I called in 1995 a ‘Smart Nation.’

“The whole point of Open Source Everything is to restore public agency. Open Source is the only form of information and information technology that is affordable to the majority, interoperable across all boundaries, and rapidly scalable from local to global without the curse of overhead that proprietary corporations impose.”

Robert Steele
Robert Steele’s graphic on open source systems thinking

It’s clear to me that when Steele talks about intelligence as ‘decision-support,’ he really does intend that we grasp “all information in all languages all the time” – that we do multidisciplinary research spanning centuries into the past as well as into the future. His most intriguing premise is that the 1% are simply not as powerful as they, and we, assume them to be. “The collective buying power of the five billion poor is four times that of the one billion rich according to the late Harvard business thinker Prof C. K. Prahalad – open source everything is about the five billion poor coming together to reclaim their collective wealth and mobilise it to transform their lives. There is zero chance of the revolution being put down. Public agency is emergent, and the ability of the public to literally put any bank or corporation out of business overnight is looming. To paraphrase Abe Lincoln, you cannot screw all of the people all of the time. We’re there. All we lack is a major precipitant – our Tunisian fruit seller. When it happens the revolution will be deep and lasting.”

The Arab spring analogy has its negatives. So far, there really isn’t much to root for. I want to know what’s to stop this revolution from turning into a violent, destructive mess. Steele is characteristically optimistic. “I have struggled with this question. What I see happening is an end to national dictat and the emergence of bottom-up clarity, diversity, integrity, and sustainability. Individual towns across the USA are now nullifying federal and state regulations – for example gag laws on animal cruelty, blanket permissions for fracking. Those such as my colleague Parag Khanna that speak to a new era of city-states are correct in my view. Top down power has failed in a most spectacular manner, and bottom-up consensus power is emergent. ‘Not in my neighborhood’ is beginning to trump ‘Because I say so.’ The one unlimited resource we have on the planet is the human brain – the current strategy of 1% capitalism is failing because it is killing the Golden Goose at multiple levels. Unfortunately, the gap between those with money and power and those who actually know what they are talking about has grown catastrophic. The rich are surrounded by sycophants and pretenders whose continued employment demands that they not question the premises. As Larry Summers lectured Elizabeth Warren, ‘insiders do not criticise insiders.'”

But how can activists actually start moving toward the open source vision now? “For starters, there are eight ‘tribes’ that among them can bring together all relevant information: academia, civil society including labor unions and religions, commerce especially small business, government especially local, law enforcement, media, military, and non-government/non-profit. At every level from local to global, across every mission area, we need to create stewardship councils integrating personalities and information from all eight tribes. We don’t need to wait around for someone else to get started. All of us who recognise the vitality of this possibility can begin creating these new grassroots structures from the bottom-up, right now.”

So how does open source everything have the potential to ‘re-engineer the Earth’? For me, this is the most important question, and Steele’s answer is inspiring. “Open Source Everything overturns top-down ‘because I say so at the point of a gun’ power. Open Source Everything makes truth rather than violence the currency of power. Open Source Everything demands that true cost economics and the indigenous concept of ‘seventh generation thinking’ – how will this affect society 200 years ahead – become central. Most of our problems today can be traced to the ascendance of unilateral militarism, virtual colonialism, and predatory capitalism, all based on force and lies and encroachment on the commons. The national security state works for the City of London and Wall Street – both are about to be toppled by a combination of Eastern alternative banking and alternative international development capabilities, and individuals who recognise that they have the power to pull their money out of the banks and not buy the consumer goods that subsidise corruption and the concentration of wealth. The opportunity to take back the commons for the benefit of humanity as a whole is open – here and now.”

For Steele, the open source revolution is inevitable, simply because the demise of the system presided over by the 1% cannot be stopped – and because the alternatives to reclaiming the commons are too dismal to contemplate. We have no choice but to step up.

“My motto, a play on the CIA motto that is disgraced every day, is ‘the truth at any cost lowers all other costs'”, he tells me. “Others wiser than I have pointed out that nature bats last. We are at the end of an era in which lies can be used to steal from the public and the commons. We are at the beginning of an era in which truth in public service can restore us all to a state of grace.”

Dr. Nafeez Ahmed is an international security journalist and academic. He is the author of A User’s Guide to the Crisis of Civilization: And How to Save It, and the forthcoming science fiction thriller, ZERO POINT. ZERO POINT is set in a near future following a Fourth Iraq War. Follow Ahmed on Facebook and Twitter.


It’s simple. If we can’t change our economic system, our number’s up

By George Monbiot

It’s the great taboo of our age – and the inability to discuss the pursuit of perpetual growth will prove humanity’s undoing
'The mother narrative to all this is carbon-fuelled expansion. Our ideologies are mere subplots.'

Let us imagine that in 3030BC the total possessions of the people of Egypt filled one cubic metre. Let us propose that these possessions grew by 4.5% a year. How big would that stash have been by the Battle of Actium in 30BC? This is the calculation performed by the investment banker Jeremy Grantham.

Go on, take a guess. Ten times the size of the pyramids? All the sand in the Sahara? The Atlantic ocean? The volume of the planet? A little more? It’s 2.5 billion billion solar systems. It does not take you long, pondering this outcome, to reach the paradoxical position that salvation lies in collapse.
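Grantham's arithmetic is easy to check. A minimal sketch, assuming "a solar system" means a sphere out to roughly Pluto's mean orbital radius (about 5.9 × 10¹² m, an assumption of mine, not Grantham's):

```python
import math

# Grantham's thought experiment: 1 cubic metre of possessions in 3030 BC,
# compounding at 4.5% a year for the 3,000 years to the Battle of Actium.
stash_m3 = 1.045 ** 3000  # roughly 2e57 cubic metres

# Assumption: treat a solar system as a sphere out to Pluto's mean
# orbital radius, about 5.9e12 metres.
solar_system_m3 = (4 / 3) * math.pi * (5.9e12) ** 3

print(f"{stash_m3 / solar_system_m3:.1e} solar systems")  # ≈ 2.6e18
```

The result is on the order of 10¹⁸, i.e. billions of billions of solar systems, matching the figure in the text.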

To succeed is to destroy ourselves. To fail is to destroy ourselves. That is the bind we have created. Ignore if you must climate change, biodiversity collapse, the depletion of water, soil, minerals, oil; even if all these issues miraculously vanished, the mathematics of compound growth make continuity impossible.

Economic growth is an artefact of the use of fossil fuels. Before large amounts of coal were extracted, every upswing in industrial production would be met with a downswing in agricultural production, as the charcoal or horse power required by industry reduced the land available for growing food. Every prior industrial revolution collapsed, as growth could not be sustained. But coal broke this cycle and enabled – for a few hundred years – the phenomenon we now call sustained growth.

It was neither capitalism nor communism that made possible the progress and pathologies (total war, the unprecedented concentration of global wealth, planetary destruction) of the modern age. It was coal, followed by oil and gas. The meta-trend, the mother narrative, is carbon-fuelled expansion. Our ideologies are mere subplots. Now, with the accessible reserves exhausted, we must ransack the hidden corners of the planet to sustain our impossible proposition.

On Friday, a few days after scientists announced that the collapse of the west Antarctic ice sheet is now inevitable, the Ecuadorean government decided to allow oil drilling in the heart of the Yasuni national park. It had made an offer to other governments: if they gave it half the value of the oil in that part of the park, it would leave the stuff in the ground. You could see this as either blackmail or fair trade. Ecuador is poor, its oil deposits are rich. Why, the government argued, should it leave them untouched without compensation when everyone else is drilling down to the inner circle of hell? It asked for $3.6bn and received $13m. The result is that Petroamazonas, a company with a colourful record of destruction and spills, will now enter one of the most biodiverse places on the planet, in which a hectare of rainforest is said to contain more species than exist in the entire continent of North America.

Almost 45% of the Yasuni national park is overlapped by oil concessions.

The UK oil firm Soco is now hoping to penetrate Africa’s oldest national park, Virunga, in the Democratic Republic of Congo; one of the last strongholds of the mountain gorilla and the okapi, of chimpanzees and forest elephants. In Britain, where a possible 4.4 billion barrels of shale oil has just been identified in the south-east, the government fantasises about turning the leafy suburbs into a new Niger delta. To this end it’s changing the trespass laws to enable drilling without consent and offering lavish bribes to local people. These new reserves solve nothing. They do not end our hunger for resources; they exacerbate it.

The trajectory of compound growth shows that the scouring of the planet has only just begun. As the volume of the global economy expands, everywhere that contains something concentrated, unusual, precious, will be sought out and exploited, its resources extracted and dispersed, the world’s diverse and differentiated marvels reduced to the same grey stubble.

Some people try to solve the impossible equation with the myth of dematerialisation: the claim that as processes become more efficient and gadgets are miniaturised, we use, in aggregate, fewer materials. There is no sign that this is happening. Iron ore production has risen 180% in 10 years. The trade body Forest Industries tells us that “global paper consumption is at a record high level and it will continue to grow”. If, in the digital age, we won’t reduce even our consumption of paper, what hope is there for other commodities?

Look at the lives of the super-rich, who set the pace for global consumption. Are their yachts getting smaller? Their houses? Their artworks? Their purchase of rare woods, rare fish, rare stone? Those with the means buy ever bigger houses to store the growing stash of stuff they will not live long enough to use. By unremarked accretions, ever more of the surface of the planet is used to extract, manufacture and store things we don’t need. Perhaps it’s unsurprising that fantasies about colonising space – which tell us we can export our problems instead of solving them – have resurfaced.

As the philosopher Michael Rowan points out, the inevitabilities of compound growth mean that if last year’s predicted global growth rate for 2014 (3.1%) is sustained, even if we miraculously reduced the consumption of raw materials by 90%, we delay the inevitable by just 75 years. Efficiency solves nothing while growth continues.
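Rowan's figure follows from simple compounding: cut consumption to 10% of today's level, hold the 3.1% annual growth rate, and ask how long it takes to climb back to today's level, i.e. solve 0.1 × 1.031ᵗ = 1 for t:

```python
import math

# Cut raw-material use by 90% today, keep 3.1% annual growth,
# and consumption regains today's level after t years:
# 0.1 * 1.031**t = 1  =>  t = ln(10) / ln(1.031)
growth = 0.031
t = math.log(10) / math.log(1 + growth)
print(f"{t:.0f} years")  # 75
```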

The inescapable failure of a society built upon growth and its destruction of the Earth’s living systems are the overwhelming facts of our existence. As a result, they are mentioned almost nowhere. They are the 21st century’s great taboo, the subjects guaranteed to alienate your friends and neighbours. We live as if trapped inside a Sunday supplement: obsessed with fame, fashion and the three dreary staples of middle-class conversation: recipes, renovations and resorts. Anything but the topic that demands our attention.

Statements of the bleeding obvious, the outcomes of basic arithmetic, are treated as exotic and unpardonable distractions, while the impossible proposition by which we live is regarded as so sane and normal and unremarkable that it isn’t worthy of mention. That’s how you measure the depth of this problem: by our inability even to discuss it.

Twitter: @georgemonbiot. A fully referenced version of this article can be found at



What Does Moore’s Law Mean For the Rest of Society?



By Clay Rawlings and Rob Bencini

Technology is advancing exponentially. Beware the disruptions to legal systems, society, and the economy, warn the authors of Pardon the Disruption.

Stare decisis is the legal principle that requires judges to respect precedents set by prior court rulings. It forms the heart of the U.S. judicial system—and it forces the law to move like a glacier. This can be a problem when technologies are changing our lives as quickly as they are now.

The legal system places great emphasis on the idea that people know the laws in advance, so they can engage in commerce knowing that courts will enforce a contract as the parties intended. The law was to be a tool of fairness, not a trap for the unwary. Massive changes in technology have complicated citizens’ ability to stay ahead of changes in the law.

Patents, for example, were intended to give protection to inventors, spurring their innovation. But in the twenty-first century, we see large corporations buying up thousands of patents to ward off competition. When new products are being considered, the possibility of a patent fight over intellectual property looms large. Apple recently won a billion-dollar judgment against Samsung for infringing on its iPhone patents. While not a knockout blow to Samsung, this will make smaller manufacturers think twice before entering the cell phone arena. What was meant as a defensive measure—protection of inventors’ ideas—has now become an offensive weapon. Buy enough patents, and you can force out any competition by alleging they have infringed on your arsenal of patents.

Genome companies are obtaining patents as fast as possible to freeze out competitors in whole areas of genomics. In June 2013, the U.S. Supreme Court ruled that natural human genes cannot be patented, but the legal landscape for genetic patenting is far from settled. The privacy of a person’s genetic makeup—what can be altered to lessen human suffering or improve performance—cropped up as a legal issue overnight. The modern legal system is not suited to fashioning remedies for novel problems that are changing at exponential rates.

The Technology of Truth?

Functional magnetic resonance imaging (fMRI)—a technique for measuring and mapping brain activity—allows psychologists to observe the brain as it functions in real time. Two companies, No Lie MRI Inc. and Cephos Corporation, claim that they can use fMRI to determine conclusively whether or not an individual is telling the truth.

The brain stores information in memory and is also capable of creating fantasies. The premise for a brain scan lie detector test is quite simple: You are placed under the fMRI scanning device, and while you answer questions, the scanner observes brain function to determine where the response originates. When you’re telling the truth, the brain accesses the area where memory is stored. When you’re lying, the brain uses the part responsible for generating fantasies.

This methodology should be foolproof: You either have a real memory, or you do not. If your answer is based in fantasy rather than memory, it is almost certainly a lie.

The scanning technology itself will become more accurate, and so will the algorithms that analyze brain function. At some point, this technology may replace random groups of 12 jurors as the “finders of fact.” We will know with certainty whether someone is telling the truth.

No one has the right to lie while testifying in court. If technology can tell us with scientific certainty whether a person is telling the truth, why not place a scanner above the witness stand? As witnesses testify, the court will be able to see in real time whether or not the testimony is true.

Protecting us from lying defendants is one potential benefit of scanning technology, but defendants and witnesses aren’t the only ones this technology will affect. The police, too, will no longer be able to fabricate probable cause. The court and jury will know whether a cop’s alleged probable cause is coming from memory or is a clever fabrication meant to deceive.

The exclusionary rule—that evidence collected in violation of a defendant’s rights is inadmissible—would suddenly have real teeth. By making the system completely truthful in every respect, we would initially free a large number of guilty criminals, their convictions overturned due to police misconduct. It would not take long for police to realize they could no longer cheat on probable cause.

Sharing the Roads with Robotic Vehicles

The proliferation of vehicles operated by machine intelligence will disrupt virtually every aspect of our economy. Robotic cars will be available to the ordinary consumer, will drive themselves on public roads, and will completely replace traditional motor vehicles. Each year, there are approximately 33,000 fatal accidents in the United States, according to the National Highway Traffic Safety Administration. Numbers of deaths and catastrophic injuries will decrease radically when machine intelligence is in control of all transportation.

We will have no accidents caused by human drivers who are drunk, fatigued, angry, distracted, in a hurry, too young, too old, reckless, suicidal, or just plain stupid. Professional drivers who operate 18-wheelers and delivery vans will also be replaced. Commercial vehicles will never need a per diem, health insurance, or payment per mile. An entire industry of professional drivers will go the way of the blacksmith.

As a result, lawyers specializing in motor vehicle accident cases will move into other areas of the law. When there is no offense, there is no need for defense. Technology will remove the drunk drivers from the streets, relieve overcrowded jails, and clear overburdened court dockets. With robotic cars, everyone’s in the passenger seat. We will have engineered the ultimate designated driver.

In the United States, about 5 million cars a year are totaled in crashes and need to be replaced. When vehicles are operated optimally by intelligent machines, collisions will decrease significantly. And with machine intelligence in control, we won’t abuse our cars by riding their brakes or jerkily accelerating and decelerating. With less to repair or replace, collision centers and parts suppliers will close, and manufacturing will suffer.

While collisions will be dramatically reduced, people will still occasionally be seriously injured and killed. Since the operation of the vehicle is autonomous, a fault-based tort system would exonerate the owner, who was nothing more than a passenger. If the manufacturer knows at the inception that it will be held legally responsible for all injuries emanating from the failure of its product, it can build that risk into the price of the robotic car. The system will function in a manner that both protects the public and is fair to the manufacturer.
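The pricing logic the authors describe, building the liability risk into the sticker price, can be sketched with toy numbers. Every figure below is an illustrative assumption, not industry data:

```python
# Folding strict manufacturer liability into the price of a robotic car.
# All figures are illustrative assumptions.
cars_sold = 500_000
claims_per_year = 200       # injuries traced to product failure
avg_payout = 1_500_000      # average damages per claim, in dollars
overhead = 1.25             # loading for administration and reinsurance

premium_per_car = claims_per_year * avg_payout * overhead / cars_sold
print(f"${premium_per_car:,.0f} per car")  # $750 per car
```

Under these numbers the expected liability cost adds only a few hundred dollars per vehicle, which is the sense in which the system can be "fair to the manufacturer."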

The federal government could enact legislation requiring a black box in all vehicles, so that the cause of each failure can easily be determined. The industry could spread the risk associated with human injury and death by requiring each manufacturer to insure against these risks. This is no different from the current system requiring each operator of a motor vehicle to be insured.

Keep in mind, though, that the actual costs are going to be massively lower than the current system. Insuring human drivers who get distracted, angry, or drunk is an expensive proposition. Because the issue of liability will no longer be litigated, the only real question will be the amount of damages to the victim. Litigation costs would be cut to less than half their current amount without the necessity of proving fault. It would make compensation of the injured both more reliable and more uniform. If we’re looking for a win–win, this is it.

But What about the Jobs! Jobs! Jobs!

Exponential improvements in technology have created new wealth in recent years. Now, for the first time in history, it isn’t land that matters so much in wealth creation as it is innovation—innovation that increases productivity and raises the standard of living throughout the developed world. Continuously advancing technology is disrupting many of the very industries it helped build—and in many cases, those rapid advances are proving to be real net job killers. Disruptions have swept through industries like publishing, music, retail, and manufacturing.

Many theorists admonish local business and government leaders to “do something” about the economy, such as attract manufacturing and entrepreneurship in order to create jobs. But pushing the rope of artificially creating entrepreneurship, a creative class, and cluster development is not working.

Former North Carolina Governor Bev Perdue, almost accidentally, may have shown a fuller understanding of the true reality of productivity than most elected officials have. As the then-newly inaugurated governor in 2009, she was a featured speaker at the North Carolina Economic Developers Association conference, where she expressed supreme optimism about the areas of the economy that she believed would be central to North Carolina’s economic future: green industry, the military, and aeronautics.

She continued her remarks with a matter-of-fact assessment by saying, “I believe the textile industry in North Carolina can still thrive. They might have to cut the workforce to increase efficiency and profitability, but.…”

She said it! She said what every business in America has said for the last six years. Workers, with their rising health care and other costs; workers, who represent a huge percentage of business costs and unproductive overhead during tough times; workers, who are the human measure of these “jobs” that elected officials promote; workers, who represent the biggest cost to virtually every company; yes, workers may have to be cut in order for a company to survive and prosper. Businesses are charged with making profits (and in this economy, surviving). Their disposition toward job creation is, “You’ve gotta be kidding. I’m trying to stay in business.”

The next time you think about job creation, try a little word exchange: Replace the word jobs with the term payroll expense. Try it and see how it feels to say this: “We need more payroll expense!” or “Why haven’t you created more payroll expense?!” It sounds weird, doesn’t it?

That’s what is truly relevant, because that’s how a potential employer sees the labor force. If a company is in survival mode, its goal is to increase profitability, not to create jobs.

The disconnect between governmental goals of creating jobs through spending and other stimulus and the virtually opposite goals of those who are expected to do the heavy lifting that solves the unemployment problem (the private sector) is undoubtedly the most confounding economic enigma today.

But what if governments could pull the private sector into providing jobs as a way of promoting the social good? Companies that have benefited from technological advancements that increase productivity could employ people to provide good-deed public services. As economics professor Bill Watkins wrote: “It turns out that a job costs less than dependency, and that’s why we need economic growth. Jobs and opportunity provide us with some things that consumption can’t. I think those are pride, dignity, and purpose.”

The new technologies that once created new industries and new jobs are now only creating new productivity without the jobs. Computers, robots, artificial general intelligence, and other technological advances have changed the economic game. From a business point of view, improved productivity is good; but from the point of view of public officials desperate to create jobs for their constituents, not so much. This may be the biggest disruption we face.

About the Authors

Clay Rawlings is a personal injury litigation attorney from Houston, Texas, and a former assistant district attorney for Harris County, Texas.

Rob Bencini, MBA, is a Certified Economic Developer (CEcD) and economic futurist from Greensboro, North Carolina.

Rawlings and Bencini are co-authors (with James Randall Smith) of Pardon the Disruption: The Future You Never Saw Coming (Wasteland Press, 2013). This article is a preview of their presentation at WorldFuture 2014: What If, the World Future Society’s conference to be held July 11-13 in Orlando, Florida.


Barclays Has The Best Explanation Yet Of How Solar Will Destroy America’s Electric Utilities


It’s been a good few decades for America’s electric utilities: As regulated monopolies, they face almost no competition and enjoy access to cheap credit.

In a new note, a Barclays team led by Y.C. Koh says the industry is finally facing its day of reckoning, from a source many have long dismissed as an unviable pipedream: solar. Specifically, the threat is residential solar: people generating their own electricity.

Underscoring that the threat is real this time, Barclays is downgrading its electric utility sector rating from Market Weight to Underweight. “The regulatory responses to the growing competitive threat from solar + storage may prove inadequate to address potential strains to the credit profiles of issuers in these states,” they write.

There are two main reasons why solar is finally for real, the group says. The first is that for more than a decade, there’s been a huge push from governments around the world, and at every level, to subsidise renewables. Bloomberg New Energy Finance (BNEF) estimates that the annual output of PV modules increased almost 30x in the past decade, from 1,000MW per year in 2005 to more than 30,000MW in 2013, Barclays notes. With that scale have come cheaper prices for panels.

Here’s what the cost curve looks like:

Solar cost curve

The second reason is the advent of cheap storage. For the past few years, homeowners have addressed renewables’ intermittency problem — the wind isn’t always blowing, the sun doesn’t always shine — by making a deal with their utility: they continue to buy its electric power, but they keep their solar panels running when they’re not home and sell any excess power the panels generate back onto the grid. This is called net metering.

Net metering has been a boon for incentivizing rooftop solar adoption. But what if you could truly power up your home through a solar-charged battery, and only have to buy utility electricity in an emergency?
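The net-metering deal described above amounts to a simple billing rule. A minimal sketch, assuming a flat retail rate and full retail-rate crediting of surplus generation (real tariffs vary widely by state and utility; the 13 c/kWh rate is my own placeholder):

```python
def net_metered_bill(consumed_kwh, generated_kwh, retail_rate=0.13):
    """Monthly bill under simple net metering: the customer pays only
    for net consumption, and surplus generation is credited at the same
    retail rate. Rate and crediting convention are assumptions."""
    net_kwh = consumed_kwh - generated_kwh
    return net_kwh * retail_rate  # negative means a credit

print(net_metered_bill(900, 750))  # billed for 150 net kWh: 19.5
```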

As recently as 2009, the all-in costs for such batteries would have been as much as $US17,000. But with the expansion of electric vehicles, Barclays says the cost of storage has been falling rapidly, and now stands at about $US3,700. And it just so happens that the power required to operate an electric vehicle can power the average home for up to three days, Barclays notes, “potentially opening a new use in residential distributed generation systems.” Battery costs could come down even further if Elon Musk’s gigafactory launches, they add. Yesterday we discussed this idea in detail. Here’s the price decline chart:

Barclays battery costs
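The "up to three days" figure is straightforward arithmetic. Assuming a large 2014-era EV pack of roughly 85 kWh and average US household consumption of about 30 kWh a day (both figures are my assumptions, not Barclays' numbers):

```python
ev_battery_kwh = 85    # assumption: large 2014-era EV battery pack
home_daily_kwh = 30    # assumption: average US household daily use
days = ev_battery_kwh / home_daily_kwh
print(f"{days:.1f} days")  # 2.8 days, i.e. "up to three days"
```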

Barclays sees that the solar + storage wave has the potential to spread beyond its roots in California and Arizona. Here is their timeline for when solar costs could reach parity in all 50 states:

Barclays solar

Cheap solar panels, combined with cheap storage, will spark a grid “defection spiral” that will pry away utilities’ grip on the power monopoly. In this scenario, early adopters begin leaving the grid, incrementally raising the power costs borne by the remaining utility customers — which further exacerbates the shift into solar and storage, and so on. We are already seeing evidence of step one, as utilities have begun complaining that solar customers are causing electricity prices for non-solar users to go up.
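The defection spiral is a feedback loop, and a toy model makes the dynamic concrete: fixed grid costs are spread over a shrinking customer base, rates rise, and rate rises push more customers out. Every parameter below is an illustrative assumption:

```python
# Toy model of the "defection spiral". All numbers are assumptions.
customers = 1_000_000
fixed_costs = 1.2e9             # annual fixed grid costs, in dollars
defection = 0.02                # share of customers leaving this year
rate = fixed_costs / customers  # dollars per customer per year

for year in range(1, 11):
    customers = int(customers * (1 - defection))
    new_rate = fixed_costs / customers        # same costs, fewer payers
    pct_rise = 100 * (new_rate / rate - 1)
    # feedback: higher rates push more customers toward solar + storage
    defection = 0.02 + 0.003 * pct_rise
    rate = new_rate
    print(f"year {year}: {customers:>9,} customers, ${rate:,.0f} each")
```

Even with mild feedback, the per-customer charge climbs every year, which is exactly the self-reinforcing dynamic Barclays warns about.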

This is maybe the most vivid description in the note of what solar will do to utilities:

we envision an electricity market where demand for grid power falls, peak hours shift (perhaps dramatically), and regulatory mechanisms need to be adjusted or overhauled to accommodate some utilities becoming the electricity generators of last resort. We expect the net effect to be higher grid power costs (thereby exacerbating the consumer shift to solar + storage), lower average credit quality for regulated utilities and unregulated power producers, and increased recognition of the long-term threat to grid power.

Whatever roadblocks utilities try to toss up — and there’s already been plenty of tossing in the states most vulnerable to solar, further evidence of the pressures they’re facing — it’s already too late, Barclays says:

We fully expect utilities and regulators to make a good faith effort to preserve the status quo “regulatory compact,” whereby the monopoly utility provides a safe and reliable service and regulators allow it to earn a reasonable low-risk return. However, we also expect them to be playing a constant game of catch-up as solar develops. The costs of solar and storage technologies are falling quickly and may fall even faster as higher demand builds additional scale. But the cost of distribution grids and thermally generated power are more likely to rise than to fall, in our view. As a result, regulators and utilities will be constantly trying to respond to a moving target, which is precisely the environment where slow-moving incumbents can fall behind.

It’s been a good run.


A new digital ecology is evolving, and humans are left behind


By George Dvorsky

Incomprehensible computer behaviors have evolved out of high-frequency stock trading, and humans aren’t sure why. Eventually, it could start affecting high-tech warfare, too. We spoke with a researcher at the University of Miami who thinks humans will be outpaced by a new “machine ecology.”

For all intents and purposes, the genesis of this new world began in 2006 with the introduction of legislation that made high-frequency stock trading a viable option. This form of rapid-fire trading involves algorithms, or bots, that can make decisions on the order of milliseconds (ms). By contrast, it takes a human at least one full second to both recognize and react to potential danger. Consequently, humans are progressively being left out of the trading loop.

And indeed, it’s a realm that’s rapidly expanding. For example, a new dedicated transatlantic cable is being built between US and UK traders that could shave another 5 ms off transaction times. In addition, a new purpose-built chip, iX-eCute, is being launched that can prepare trades in an astounding 740 nanoseconds.

Virtual predators and prey

The problem, however, is that this new digital environment features agents that are not only making decisions faster than we can comprehend, they are also making decisions in a way that defies traditional theories of finance. In other words, it has taken on the form of a machine ecology — one that includes virtual predators and prey.


Consequently, computer scientists are taking an ecological perspective by looking at the new environment in terms of a competitive population of adaptive trading agents.

“Even though each trading algorithm/robot is out to gain a profit at the expense of any other, and hence act as a predator, any algorithm which is trading has a market impact and hence can become noticeable to other algorithms,” said Neil Johnson, a professor of physics at the College of Arts and Sciences at the University of Miami (UM) and lead author of the new study. “So although they are all predators, some can then become the prey of other algorithms depending on the conditions. Just like animal predators can also fall prey to each other.”

When there’s a normal combination of prey and predators, he says, everything is in balance. But once predators are introduced that are too fast, they create extreme events.

“What we see with the new ultrafast computer algorithms is predatory trading,” he says. “In this case, the predator acts before the prey even knows it’s there.”

Johnson describes this new ecology as one consisting of mobs of ultrafast bots that frequently overwhelm the system. When events last less than a second, the financial world transitions to a new one inhabited by packs of aggressively trading algorithms.

Now, while we’ve known about high-frequency stock trading for years now, what’s less known is the frequency of ultrafast extreme events (UEEs). In the context of stock trading, a UEE (sometimes referred to as a flash freeze) manifests as a crash or spike. That is, an event in which a stock price ticks down or up at least ten consecutive times before changing direction, and the price change exceeds 0.8% of the initial price.
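The UEE criterion above translates directly into a scan over tick data. A sketch, with function and parameter names of my own choosing and a run-counting convention that is my assumption rather than the paper's exact method:

```python
def find_uees(prices, min_ticks=10, min_move=0.008):
    """Scan a tick-price series for ultrafast extreme events: runs of at
    least `min_ticks` consecutive same-direction price changes whose
    total move exceeds `min_move` (0.8%) of the run's starting price."""
    events = []
    i = 0
    while i < len(prices) - 1:
        direction = 1 if prices[i + 1] > prices[i] else -1
        j = i
        # extend the run while successive changes keep the same sign
        while j < len(prices) - 1 and (prices[j + 1] - prices[j]) * direction > 0:
            j += 1
        ticks = j - i  # consecutive same-direction changes
        if ticks >= min_ticks and abs(prices[j] - prices[i]) / prices[i] > min_move:
            events.append((i, j, "spike" if direction > 0 else "crash"))
        i = max(j, i + 1)
    return events

# 11 straight down-ticks totalling a 1.1% drop: one "crash" event
crash = [100.0 - 0.1 * k for k in range(12)] + [99.0, 99.1]
print(find_uees(crash))  # [(0, 11, 'crash')]
```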


According to Johnson’s research, there were a jaw-dropping 18,520 crashes and spikes with durations less than 1,500 ms between January 2006 and February 2011.

And disturbingly, as the duration of these UEEs fell below human response times, the number of crashes and spikes increased dramatically. What’s more, these crashes could not be attributed to other factors.

Mobs of Ultrafast Bots Vying For Control

Crashes are problematic on their own, but they’re doubly so when we’re left out of the loop. Without human oversight, these algorithms battle it out to take control.


“It simply is faster than human predators (i.e. human traders) and the humans are inactive on that fast timescale,” says Johnson. “So the only active traders at subsecond timescales are all robots. So they compete against each other, and their collective actions define the movements in the market.”

In other words, they control the market movements. “Humans become inert and ineffective,” he says. “What we found, which is so surprising, is that the transition to the new ultrafast robotic ecology is so abrupt and strong.”

There’s also the global economy to consider. UEEs can result in market instability, highlighting the need to develop countermeasures — realtime interventions that could mitigate risk.

“There is real money being gained and lost here — even a few thousand dollars every millisecond, which is a tiny amount on the market, is a million dollars per second,” he told us. “This money could be pension fund money, and so on. So somebody needs to understand what is going on, and if it is ‘fair’.”

If we don’t understand what’s going on, he argues, it will be impossible to make informed decisions as to whether regulation is required, and if so then what type.

“In terms of risk to the financial system as a whole, the fact that our paper reports heightened numbers of extreme events within financial stock prior to the 2008 financial crash, strongly suggests that there is indeed a connection between what goes on in this subsecond world, and the usual financial market timescales of days and weeks,” he says. “Most of us have our pensions buried in the markets, so in addition to pure science interest, this issue is of direct relevance to everyone.”

The Need For Speed

We asked Johnson why we can’t just slow down the process and legislate human-comprehensible timescales.

“Well, there will always be companies trying to go faster,” he responded. “If there was a minimum time for the transaction, then it would be like holding people at the door on the first day of a sale, then opening the doors to a stampede. The Europeans are trying to apply a so-called Tobin tax on transactions. But lawyers today have said this may be illegal, so it is a complex situation concerning what to do — in part because no one fully understands what is actually going on. We have only scraped the surface in our paper, but we are confident there are many strange effects waiting to be discovered in the subsecond trading world.”

Indeed, Johnson worries that this new all-machine phase could extend to the world of cyber-attacks and cyber-warfare.

Given the presence of rival agents, bots could be set against each other in a similar manner, again creating a digital ecology outside of our control and comprehension. But unlike a market crash, a UEE in this context could result in something quite catastrophic, including the collapse of the IT or energy infrastructure. It could even give rise to an advanced artificial intelligence.

Another possible solution, albeit a more radical one, is to engage in human intelligence amplification (IA). By augmenting human intelligence, we may be able to stay in the loop. Modified humans could enter this new digital jungle and take part as a generalized intelligence in an effort to regain control. The trouble is that IA could result in severe downstream psychological consequences, such as insanity and psychopathy. Alternately, we could develop a specialized AI or AGI (artificial general intelligence) to perform this work on our behalf.

Towards a New Scientific Theory of Subsecond Decision-Making

More simply, Johnson says we need to better understand the collective behavior of these interacting systems if we’re to avoid problems like microcrashes. Surprisingly, he says this may not be as difficult a proposition as it sounds.

“Algorithms are ultimately deterministic, even though they might try to act ‘randomly’ to avoid detection,” he explains. “So while humans have ‘free will’ and can have all sorts of strange and random behaviors, algorithms can be doing simple things like predicting that the next movement will be a continuation of the previous few timesteps.”

The trouble, he says, is that there may be many robots that behave similarly, since this particular algorithm is an obvious but simple one, which means that crowds/mobs of these algorithms spontaneously form and act.
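
The crowding effect Johnson describes can be sketched as a toy model. This is entirely hypothetical, not the paper's agent specification: every bot runs the same deterministic continuation rule, so identical bots act in unison and their aggregate demand feeds back into the price.

```python
# A toy model of bot crowding: each bot predicts that the next move will
# continue the trend of the last few timesteps. Because the rule is
# deterministic and shared, all bots issue the same order at once, and
# their collective demand amplifies the move they predicted.

def trend_signal(history, window=3):
    """+1 (buy) if the last `window` moves were all up, -1 (sell) if all
    down, 0 otherwise."""
    if len(history) < window + 1:
        return 0
    moves = [history[i] - history[i - 1] for i in range(-window, 0)]
    if all(m > 0 for m in moves):
        return 1
    if all(m < 0 for m in moves):
        return -1
    return 0

def simulate(prices, steps=10, n_bots=100, impact=0.0001):
    """Each unit of net bot demand nudges the price by `impact` (fractional)."""
    history = list(prices)
    for _ in range(steps):
        net_demand = sum(trend_signal(history) for _ in range(n_bots))
        # identical bots => net_demand is simply n_bots times the shared
        # signal, so a tiny drift snowballs into a large, fast move
        history.append(history[-1] * (1 + impact * net_demand))
    return history
```

Seed it with three slightly rising ticks and all 100 bots pile onto the same side at every step, turning a 0.1% drift into a sustained 1%-per-step spike, which is exactly the mob dynamic described above.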

What’s more, if cyber arms races are to continue apace, the sophistication of these bots will only increase. As time passes, the complexity of these new ecologies may prohibit the kind of safety measures Johnson calls for.

But looking ahead, Johnson is optimistic about our chances.

“We feel we have already made an advance in our present paper,” he told io9. “Though it is not yet the full story, our paper sets the scene for a systematic effort to unravel the various processes at play.”

According to Johnson’s paper, which now appears in Scientific Reports, every UEE is the result of a sudden excess of buy or sell demand in the market. Their new model provides a simple explanation for how these sudden excess demands are generated (but not how they get fulfilled).

“Even with the results we have so far, in particular our mathematical ecological model, we can check ‘what if’ scenarios about the future, and also attempt to reverse engineer what is going on when one of these ultrafast ‘accidents’ happen.”

Read the entire study in Scientific Reports: “Abrupt rise of new machine ecology beyond human response time.”


Citizens strike back: Tiny, low-cost drones may one day assassinate corrupt politicians, corporate CEOs and street criminals


by Mike Adams, the Health Ranger
(NaturalNews) This is an important analysis article on what I believe will be a coming wave of “Kamikaze assassination micro drones” which will soon be affordable enough for everyday citizens to deploy against selected targets. Why is this discussion important? Because these micro drones have the very real potential to re-shape the distribution of power across our planet… and they may pose a real danger to public safety and security across society.

(As you read this article, please bear in mind that I do not in any way condone the tactical applications described herein. This article is a WARNING, not an endorsement, of this very dangerous convergence of trending technologies which may threaten us all.)

Tiny assassination drones must be understood as a revolutionary new kind of weapon, and there is firm historical precedent for dramatic sociopolitical shifts rising out of such revolutions.

For example, the invention of the gunpowder-based rifle radically decentralized military power, making firepower affordable and available to the masses. This caused a global wave of popular revolutions that ultimately led to modern-day representative government, where those in power were suddenly forced to listen to the needs of their armed citizens. (Before the invention of gunpowder, kings simply deployed heavily-armored knights against citizens, forcing the peons into obedience thanks to a vastly superior weapons and defense system that was completely out of reach of the masses.)

Today we have large-scale militarized “drones” — unmanned aerial vehicles, or UAVs — enjoying widespread deployment by the Pentagon, which plans to spend $2.5 billion next year on these drones (1). These UAVs conduct mission reconnaissance, target acquisition and weapons delivery all on the same platform. For now, they represent a battlefield tactical edge for the United States of America, but that advantage is likely to be short-lived for reasons discussed here.

Drone miniaturization, facial recognition systems and kinetic kamikaze missions

From studying trends in drone development, both in terms of software and hardware, I am now predicting the development of facial-recognition “kamikaze micro drones” capable of carrying out targeted human assassination missions with remarkable precision and reliability. The four trends that will lead to this are:

1) Drone miniaturization: The development of mass-produced, affordable “micro drones” about the size of a common bird. These will likely be produced as hobby aircraft which will be easily modified to take on a more aggressive role.

2) Facial recognition systems: The miniaturization of facial recognition software / hardware systems which may be deployed on micro drones and powered by very small on-board power supplies.

3) Rapid advances in drone manufacturing efficiency, resulting in greater affordability of drone platforms by smaller and smaller groups, including corporations, smaller nations, universities, vigilantes and even activist groups.

4) Incremental improvements in the power density of on-board batteries, allowing greater flight time and more CPU-intensive on-board computations.

These four trends will ultimately result in the creation of “Kamikaze assassination micro drones” with the ability to search for, identify and terminate a specific human target. It is likely, in fact, that many governments of the world are already working on this technology.

This technology will reshape the meaning of “war” by allowing rogue nations like North Korea, for example, to simply ship tens of thousands of such drones into the USA via China, marked as “toys” on import manifests. Once in the USA, these micro assassination drones could be dropped from low-flying airplanes or released from vehicles in city parks to carry out their pre-programmed missions of targeted assassination across U.S. cities.

Future Air Force battles may be carried out by palm-sized aircraft

The United States Air Force already appears to be developing such devices, by the way. As journalist Susanne Posel writes (2):

Under the Air Vehicles Directorate branch of the US Air Force, research is being conducted to perfect remote-controlled micro air vehicles (MAVs) that are expected to “become a vital element in the ever-changing war-fighting environment and will help ensure success on the battlefield of the future.”

The Air Force has released a promotional video showing MAVs under development right now.

How Kamikaze micro drones will work

Kamikaze micro drones do not need to carry conventional weapons or explosives of any kind. Instead, they may simply carry an on-board serrated puncture weapon such as a crossbow hunting broad tip, affixed to the end of a shaft in a spear arrangement.

These devices are commonly available for purchase right now, where they are sold as “Killzone broadheads” and boast the following marketing claims:

* The new Killzone Crossbow is a 2 blade rear-deploying broadhead that packs a devastating 2″ cutting diameter
* 2″ cutting diameter for devastating wound channels & excellent penetration
* Heavy-duty, Razor-sharp .039″ blades

These crossbow hunting tips can also be purchased with cash at any sporting goods store.

Next, the Kamikaze drone’s on-board operating system is loaded with the facial imagery of the intended target, then released in an area the target is known to frequent (such as near their home, a restaurant, or their place of employment).

The micro drone expends energy to fly to a “perch” location from which it can conduct covert facial recognition surveillance without being spotted and without expending the enormous amount of energy needed to hover in place. From this perch location, the drone will observe faces passing by, comparing them to its intended target.

Once the micro drone spots the intended target, it can either “dial home” and transmit a picture of the target to a remote operator for a human kill decision, or it can be programmed to make that decision autonomously based on a threshold of certainty in the facial recognition match.

Once the kill decision has been made, the micro drone deploys its serrated spear and launches itself toward the target at high speed, aiming to thrust the spear into the neck of the subject. A two-inch-wide cutting pattern almost guarantees the blades will slice through an artery or possibly even sever the spinal column. Although the micro drone’s mass seems quite small, the human neck is especially vulnerable and can be easily penetrated by a serrated short spear carried with the momentum of a small object flying at high speed.

Once the attack is complete, the drone is simply abandoned, having completed its job. It can be pre-programmed to wipe its own memory, erasing any traces of its programming code or flight history.

What if anyone could kill almost anyone else for about five thousand dollars?

In time, such drones could be purchased or built for less than a thousand dollars each. With an estimated mission success rate of 20%, that means the out-of-pocket cost to successfully kill someone with one of these drones might only be $5,000.

Before I explain why this matters, let me be clear that I am wholly against the use of violence to achieve commercial or political gain, and in no way do I condone the use of Kamikaze drones as described here. In fact, this article should serve as a warning to what’s coming in the hopes that we might achieve some globally-observed limits on drone deployment.

But until that happens, here’s where this is headed: At $5,000 per assassination, there is a very long list of corporations, politicians, activists and individuals who would be willing to deploy these drones to assassinate all kinds of targets: members of Congress, corporate rivals, political enemies, competing drug dealers, ex-wives or ex-husbands… and the list goes on.

These kamikaze micro drones could even be used as weapons of war. Imagine Iran or North Korea, for example, deploying thousands of such devices around Washington D.C. with the sole purpose of killing as many U.S. Senators and members of Congress as possible. Tactically, that’s a very low-cost war with a very high “return” in terms of “enemy casualties” from the point of view of the attacker.

But individuals and vigilantes could also use the technology for their own purposes at a local level. Ponder for a moment what happens when anyone with a mere $5,000 and a few photos of their intended target can simply release a small drone out of a backpack and sit back while that micro drone locates and assassinates their intended target (using commonly available killing weapons, no less). The barrier to entry is shockingly low, making such solutions readily available to anyone willing to surf the ’net and download the operating system that carries out such activities. (Source code will no doubt be posted on many hacktivism sites.)

It’s not difficult to imagine local neighborhood watch groups pooling their funds and deploying drones to kill local drug dealers who terrorize the streets, for example. Even vigilantes who seek to protect their fellow citizens might see themselves as some sort of “drone superheroes” who deploy kamikaze drones to take out local crime bosses or dirty politicians who violate the law.

Everyday citizens would have the power to assassinate Presidents

What we are really looking at here — and again I must repeat and urge that IN NO WAY DO I CONDONE OR ENCOURAGE SUCH ACTS OF VIOLENCE — is the rise of a decentralized, affordable technology which could someday allow ordinary citizens to quite literally assassinate Presidents.

Which Presidents? Any that you can imagine, of course: Presidents of nations, Presidents of corporations, Presidents of universities and so on. It is very difficult to imagine how highly-visible people could be protected against such attacks based on present-day defensive tactics and weaponry. Handguns and rifles, for example, would be very hard-pressed to shoot down a fast-moving micro drone making a kamikaze attack.

The U.S. Secret Service, a group of incredibly well-trained and highly-dedicated individuals, probably has never faced a micro drone attack and very likely has no training for how to deal with such an attack. Clearly this is going to have to change in the very near future as such drones come within reach of everyday people. Every high-ranking member of every government around the world, in fact, is going to need to start thinking about how to be safe out in the open once these micro drones become a reality. (I have developed some detailed ideas on defensive tactics against such attempts, if anybody from the U.S. Secret Service is interested…)

The bottom line on this is that anyone who appears out in the open — giving a speech, taking a walk in the park, or pursuing a campaign trail — could be easily assassinated with one or more such Kamikaze micro drones. No one is immune from such attacks.

Another key “advantage” of this weapon system — from the point of view of the attacker — is that the attack is virtually untraceable. The person who launches the attack could be miles away by the time the drone actually strikes, and there’s no trail of gun registrations, ammo purchases or explosives to track down. In fact, the drone could be programmed to wipe its own memory clean after the attack is carried out, erasing any on-board evidence of the executable code, target images or operating system. The only evidence left behind would be the hardware platform of the drone itself, which is likely to be based on a readily available “hobby” drone chassis that’s impossible to link to any specific individual.

As you can see, this would create real nightmares for law enforcement investigators. And in a society that we all would like to see remain peaceful and safe, the idea that some individuals could operate deadly assassination drones with near-impunity should be downright alarming. Because many people would use this technology with some highly destructive intent.

A tremendous threat to law enforcement

As Natural News readers already know, I have worked closely alongside law enforcement in the past, engaging in fundraising, defensive martial arts training and more. One of my greatest fears with this kamikaze micro drone weapons platform is that it could easily be used by even a poorly-financed drug gang to eliminate local law enforcement personnel en masse, right before a major drug run activity takes place.

A small air force of such drones — say, 100 drones at just $1,000 each — could swarm a small town and kill any member of law enforcement spotted in public. That’s a mere $100,000 investment for a drug gang that might be making a multi-million-dollar smuggling run through a small urban chokepoint.

Similarly, an activist group committed to acts of violence could quite literally launch a war on the CEOs or employees of any targeted corporation. If some group didn’t like an oil company, a factory farming operation, or even a weapons manufacturer, it could quite easily purchase and launch a swarm of micro drones to kill employees as they walk through the company’s parking lot each day, for example. It doesn’t take very many casualties among key corporate scientists to derail R&D programs.

In all, the potential for a “micro drone Wild West” is very real and very concerning. And here’s why it could be even more wild than you might imagine…

Mass chaos because there’s no personal risk

The availability of low cost but highly effective kamikaze micro drones could unleash real chaos across society for a reason you may not have anticipated: the attackers do not put themselves at risk.

Allow me to explain: In a town where everybody carries a loaded gun, you have the widespread availability of weapons, but each person puts their own life at risk by deploying any such weapon. That’s why an armed society “is a polite society,” as they say. Guns are everywhere, but nobody wants to die in a gunfight, so the guns stay in their holsters. In summary, you can’t deploy the weapon without the risk of getting killed in the process.

But kamikaze micro drones take the risk of personal harm out of the equation. The weapon is no longer attached to the person. They are physically far apart. Now you have cheap killing machines with zero personal risk of harm on the part of the attacker. If the drone gets destroyed, they’ve only lost whatever money it costs to replace it. Even if the drone gets captured, it’s not easy to link back to the attacker, so personal risk is minimized.

So with micro drones, we have a society where everybody can have a deadly assassination weapon without the risk that would traditionally accompany an attempted assassination. In effect, we now have “anonymous assassination weapons,” and as we’ve seen in online gaming, the results of anonymous actions are often disastrous: when their own real life isn’t at stake, people will behave in erratic, power-hungry ways that would never be pursued if their own lives were at risk. And because the micro drone does the killing for them, “killers” no longer have to do any killing themselves. They don’t even need to know how to use a knife, or a gun or explosives. All they need is to buy a micro drone, download the kamikaze software, load up a couple of pictures of their target, and let it loose on the sidewalk.

That makes killing frighteningly easy, affordable and accessible to the masses. For obvious reasons, this is not something we would ever want to see in a civilized society.

Drone anarchy?

In the minds of some people, this might in some ways be argued as a good thing. In a world where power is increasingly centralized in the hands of the few, the ability to easily acquire and deploy affordable, targeted killing machines might be called by some a “leveling of the playing field of power.”

Yet I would urge a careful review of all the implications of such technology before reaching any firm conclusions. The widespread availability of anonymous, autonomous killing machines should be treated with extreme caution. Because in a world where autonomous killing machines are readily available and affordable, those who already sit in positions of centralized power would also have access to these machines in very large numbers.

Anyone the authorities wanted to eliminate could simply have their face images fed into a network of micro drones deployed across any given city. A few hours later, they’re all dead, and the city didn’t even have to involve human police officers or court judges. The drone killings of citizens might even be sanctioned by the courts as a sort of “affordable justice” in a society increasingly burdened by runaway debt and bankruptcies.

Remember: President Obama has already built the “legal” framework for the drone killings of U.S. citizens on U.S. soil. Now it’s only a question of the technology catching up with the lawlessness that has already been embraced by the government itself (where due process is now considered ancient history).

When considering the implications of these drones, it’s important to look at all the various parties that might be tempted to use them (and for what purpose). It’s not difficult to imagine all the following groups wanting to deploy assassination drones: corporations, vigilantes, drug gangs, the military, the CIA, local law enforcement, federal law enforcement, terrorist groups, nation state enemies of America, anarchists and possibly even entertainment junkies who would stage drone killings just to post the “drone snuff films” on the ‘net.

How to hide from drones

All this means more and more people will someday need to hide their faces if they wish to venture out into the open world. This may soon include important political figures, celebrities, corporate leaders and almost anyone with a publicly-recognizable face.

A number of strategies are already being explored for this purpose. For example, artist Adam Harvey is currently working on the CV Dazzle project, which explores face-paint camouflage patterns that confuse facial recognition systems.

Another face camouflage strategy uses hair design and makeup to deter facial recognition systems.


Another inventor has also developed a printable face mask that he calls a Personal Surveillance Identity Prosthetic.

His company is Urme Surveillance, and he also has an Indiegogo campaign to raise funds for the project.

As the Urme Surveillance website explains, “Our world is becoming increasingly surveilled. For example, Chicago has over 25,000 cameras networked to a single facial recognition hub. We don’t believe you should be tracked just because you want to walk outside and you shouldn’t have to hide either. Instead, use one of our products to present an alternative identity when in public.”

With the rise of kamikaze micro drones, protecting your identity in public may be more than a privacy tactic… it may mean the difference between living and dying.



Humanity Now Officially Ready For Suspended Animation


Surgeons from the UPMC Presbyterian Hospital in Pittsburgh, Pennsylvania, are set to begin suspended animation trials by dramatically cooling down trauma victims in an effort to keep them alive during critical operations.

Twenty years ago, Peter Safar and Ron Bellamy proposed that the rapid induction of hypothermia could “buy time” for a trauma surgical team to control bleeding. Now, thanks to the work of Peter Rhee and Samuel Tisherman, this idea is officially ready for prime time.


“We are suspending life, but we don’t like to call it suspended animation because it sounds like science fiction,” noted Tisherman in a New Scientist article. “So we call it emergency preservation and resuscitation.” The idea is to buy patients precious time during critical operations, such as after a massive heart attack, stabbings, or shootings.

The technique will be used on 10 patients who would otherwise be expected to die from their injuries. The doctors on the project will be paged when a candidate patient arrives at the hospital; there’s usually one case like this every month, typically with survival rates less than 7%.

It’s part of a feasibility and safety study called Emergency Preservation and Resuscitation for Cardiac Arrest from Trauma (EPR-CAT).

Because patients cannot give informed consent, the study will be conducted under the exception-from-informed consent process, which includes community consultation and public notification. So, if you live in the Pittsburgh area, and this seems too risky for you, you have to opt out (which you can do here).

How It Works

This technique involves internal rather than external cooling. A team of surgeons will remove all of the patient’s blood, replacing it with a cold saline solution; the cold fluid is administered through a large tube, called a cannula, which is placed into the aorta, the largest artery in the body. This will slow down the body’s metabolic functions, significantly reducing its need for oxygen. Then, a heart-lung bypass machine will be used to restore blood circulation and oxygenation as part of the resuscitation process. A state of profound hypothermia will be induced, at about 50°F (10°C), to provide a “prolonged period of cardiac arrest” after extensive bleeding. In other words, clinical death.

The technique, which was developed by Peter Rhee, was successfully tested on pigs back in 2000 (his resulting study can be found here). Writing at CNET, Michelle Starr explains more:

After inducing fatal wounds in the pigs by cutting their arteries with scalpels, the team replaced the pigs’ blood with saline, which lowered their body temperature to 10 degrees Celsius.

All of the control pigs, whose body temperature was left alone, died. The pigs who were resuscitated at a medium speed demonstrated a 90 percent survival rate, although some of their hearts had to be given a jump start. Afterwards, the pigs demonstrated no physical or cognitive impairment.

The technique, therefore, will only be used as an emergency measure on patients who have suffered cardiac arrest after severe traumatic injury, with their chest cavity open and having lost at least half their blood already — injuries that see only a seven percent survival rate. The survival rate of these patients will then be measured against a control group that has not received the treatment before further testing can begin.

The human body can only be placed in this state for a few hours, so we’re still quite a ways off from the suspended animation typically featured in scifi. But if this technique is any indication, we may get there just yet.


The Coming Retail Environment: 3-D Printed Makeup Illustrates How It Works

Imagine being able to make your own gorgeous, high-quality makeup at home, using any colors you choose.

That’s the future envisioned by Grace Choi, who made a huge splash this week when she presented a product that can 3-D print makeup at TechCrunch Disrupt.

Choi has created a prototype for a printer called “Mink” that will let users choose any color imaginable and then print out makeup in that exact same hue (at this point, she’s only done demonstrations with blush). By allowing people to skip the expensive department store prices to make the perfectly colored products themselves, Mink could completely revolutionize the makeup industry.

TechCrunch reporter Colleen Taylor asked Choi some more questions about Mink after her ground-breaking presentation to get a better idea of how the product will work.


Although the prototype is currently the size of the average at-home printer, Choi says that the final product will be about the size of a Mac Mini and will sell for about $300, at least at first. There are two key features to this printer: It uses a cosmetic-grade dye that’s FDA-compliant, and, instead of printing on paper, it will print its colors onto a powder substrate that is like the raw material of regular makeup.

“It comes from the same sources as those products that you see on store shelves,” Choi told Taylor.

Choi’s product would let users find a color online, use a tool to find that hue’s hexadecimal number, and then print it. Every color has a unique hex number so you could literally print out any color.
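
The hex-number step is straightforward to sketch. Below is a hypothetical illustration of how a picked color might be parsed and turned into pigment proportions; Mink's actual color pipeline has not been published, so the function names and the naive subtractive CMY conversion are assumptions.

```python
# A hypothetical sketch of the workflow described above: parse a web hex
# color code into RGB, then convert it to the subtractive CMY proportions
# a pigment printer would mix. (Not Mink's published pipeline.)

def hex_to_rgb(hex_code):
    """'#FF8040' -> (255, 128, 64)"""
    h = hex_code.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_cmy(rgb):
    """Naive subtractive mix: each pigment channel is the complement
    of its RGB channel, scaled to 0..1."""
    return tuple(round(1 - c / 255, 3) for c in rgb)
```

Under this toy conversion, a user who picks #FF8040 would get no cyan, about half-strength magenta, and about three-quarter-strength yellow.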

She says that Mink will be targeted toward 13- to 21-year-old girls who are still experimenting with their makeup habits. That period is also when girls build confidence, she says, explaining that when she was growing up, she would sometimes have trouble finding beauty products suited for her skin.

When stores didn’t have any products targeted toward Asian women, she says she felt alienated. Similarly, when stores didn’t carry the more-exotic colors she was looking for, it made her feel like her ideas were abnormal.

“If they didn’t have a green- or black-colored lipstick that made me think that there was something weird about the way I was thinking,” Choi said. That thinking damaged her self-esteem, and she says that she stopped speaking up in class because she started believing that all her ideas were strange.

She wants her products to encourage young women to test out lots of different looks and have complete freedom over the colors that they choose, without ever feeling like their ideas were weird.

She says that the reaction to her product so far has been completely amazing.

“I’m really grateful. I’m really overwhelmed. I’m really excited about all of this,” Choi says. After a pause, “I’m speechless.”



Editor’s Note: Expect many new and exciting products to come to market as 3-D printing takes off. This will revolutionize the retail industry over time as many OTC products can be replaced with a 3-D printer and their basic ingredients.

The Future of Smart Surgery Technology

In what promises to be one of the most exciting new tools in the battle against cancer, surgeons now have an entirely new option for diagnosing this deadly disease, and it fits right in the palm of their hand.

A brand new experimental surgical knife can help surgeons ensure that they have, in fact, removed cancerous tissue. As surgeons cut, the knife heats the tissue to make the cutting process easier; this produces a type of smoke with a very distinct, sharp smell.

The knife can analyze the smoke itself and can sense whether the tissue being cut is cancerous tissue or healthy tissue.

The invention is the creation of Dr. Zoltan Takats, who is based at Imperial College London. He initially had a hunch that the smoke produced during cancer surgery would be able to yield important clues during the process itself.

While the surgical knife is of typical size, it connects to a mass spectrometry device the size of a refrigerator. The wheeled device is able to analyze the smoke from cauterized tissue without it having to be sent to a lab. This yields instant results, which will not only shorten surgeries themselves but also save a significant amount of money, because the surgery won’t have to be repeated if surgeons didn’t make the right cut in the right area.

The mass spectrometry device analyzes the smoke “signatures” of various tissue types and displays the result on a monitor. A green reading indicates that the tissue is healthy, a red reading that it is cancerous, and a yellow reading that the tissue is unidentifiable.
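The decision logic described here can be sketched in a few lines of code. This is purely illustrative: the reference spectra, peak intensities, and similarity threshold below are invented, not taken from the actual device.

```python
# Hypothetical sketch of the green/red/yellow readout: compare a measured
# smoke "signature" against reference profiles and pick a display color.
# All numbers here are made up for illustration.
import math

REFERENCE = {
    "healthy":   [0.9, 0.1, 0.3, 0.7],   # invented peak-intensity profile
    "cancerous": [0.2, 0.8, 0.6, 0.1],
}
THRESHOLD = 0.95  # minimum cosine similarity to call a confident match

def cosine(a, b):
    # Cosine similarity between two intensity vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def classify(spectrum):
    """Return 'green', 'red', or 'yellow' as in the monitor readout."""
    scores = {label: cosine(spectrum, ref) for label, ref in REFERENCE.items()}
    best = max(scores, key=scores.get)
    if scores[best] < THRESHOLD:
        return "yellow"                       # unidentifiable tissue
    return "green" if best == "healthy" else "red"

print(classify([0.88, 0.12, 0.28, 0.72]))    # a spectrum close to the healthy profile
```

The real system matches spectra against a library built from thousands of tissue samples; this toy version only shows the shape of the classification step.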

Not only will this process significantly lower the chances that repeat surgery is needed; it also has the potential to spare patients chemotherapy or radiation.

The knife and machine together cost about $380,000, though the product has not yet been commercialized. The overall cost is expected to fall once more hospitals and surgery centers begin placing orders.

With surgery being the most common method of removing tumors, this development represents a potential ace up the sleeve for cancer surgeons everywhere. The knife was tested over a two-year period on samples from 302 patients with a wide range of cancers, including brain, colon, breast, liver, and lung cancer.

The knife was used to analyze tumors in 91 of those patients, and it correctly identified cancer in all 91 cases.

The results were published in the respected journal Science Translational Medicine. Imperial College London and the Hungarian government contributed funding and support to the researchers working on the project.

The product was first shown publicly at a demonstration in London in mid-July, when the doctors involved in the research sliced into slabs of pig’s liver. The distinctive smell of burning flesh quickly filled the room as the researchers showed how the device works on different test slabs.

The product has yet to be submitted for formal regulatory approval, and a number of studies are planned both to refine the device and possibly to find new uses. Many medical experts believe it will prove most effective in brain surgery, as various types of brain cancer are notorious for infiltrating brain tissue in areas not visible to the surgeon.

Brain surgery is an extremely intricate process with little margin for error, and anything that eliminates guesswork on the surgeon’s part will make it easier and more effective.

Knowing that cancerous tissue has been removed before the patient is stitched up has the potential to save lives and save money for our healthcare system.

Further studies will focus on the device’s benefits for patients, including reductions in surgery time, blood loss, and post-surgery infection rates. Many observers have described the product as a fascinating opportunity to save lives, save money, and make cancer easier to detect and destroy.

To your health!


Robots And Software Eating Jobs? Let Them, You Can Create Your Own

LinkedIn (Photo credit: Wikipedia)

New technologies are eating jobs. Big deal, you might say. After all, the steam engine, cotton gin, sewing machine, and automobile all eliminated jobs. The fact is that new technologies have long created many more new jobs than they have eliminated.

But today is different. In the past, innovation advanced slowly enough that people had time to recognize and adapt to new opportunities before many of the old jobs disappeared. Today, innovation is advancing so quickly that jobs are being destroyed and new opportunities are being created faster than many people can recognize them or adapt to them. Today, we need to recognize those opportunities and adapt to them ever more quickly. The good news is that anyone can do this, and best of all, anyone can create his or her own job. Including you. In this article, we’ll see how.

Jobs Are Delicious Meat
Three noted scholars and friends of mine have written on technology eating jobs:

· In his seminal 2010 The McKinsey Quarterly article, “The Second Economy,” Santa Fe Institute’s Brian Arthur predicts that in about two decades, a “second economy” of software, servers, and sensors will rival the size of the human economy, in value added if not in revenue. This autonomous economy is already automating formerly human tasks, such as airline passenger management (reservations, check-in, security, baggage, and billing) and international shipping (registration, tracking, and forwarding).

· In Race Against the Machine (2011), MIT’s Erik Brynjolfsson observes that software and automation are eating away at low- to mid-level desk jobs like accounting and customer service, a trend that will eventually extend to high-skilled professions like medicine and engineering, on the one hand, and trades like hairdressing and plumbing, on the other. Google driverless cars will replace human drivers, and IBM Watson-like technology with sensors will replace physicians’ medical diagnoses.

· Most recently, in Average Is Over (2013), George Mason University’s Tyler Cowen writes that the above trends will lead to stagnant or falling wages for much of the United States. Future employment will require skills to collaborate with and complement machines to avoid competing with and being replaced by them.

The Oxford Martin School concurs, concluding that 45% of American jobs are at high risk of being taken by computers within the next two decades.[1] Most vulnerable are jobs in transportation/logistics, production labor, and administrative support; next are services, sales, and construction; last will be management, science and engineering, and the arts.

Reports of Employment’s Death Have Been Greatly Exaggerated
In the early 19th century, the automated loom, famously protested by the Luddites, took jobs away from weavers. Later, electricity and the light bulb took away jobs from wood-burning stove and candle makers. The automobile took away jobs from buggy makers. Digital computers and switches took jobs away from their human counterparts. But in each case, new technologies provided many more jobs than they eliminated, in two ways. The first, more modest way was through the development, manufacture and maintenance of the new technologies, be they looms or light bulbs. The second, more significant way was through the leveraging and combining of the technologies in new, often unexpected applications and business arrangements that could not have existed without the technologies. The cotton gin and automated loom enabled large-scale production of soft, comfortable clothing, making it affordable for millions of people for the first time. Steam engines and railroads enabled goods to be shipped to distant markets, which in turn made Sears & Roebuck mail order catalogs and later department stores possible. Electricity enabled the global power grid and electrical appliances. Digital computers and switches enabled IT, telecommunications, software, the Internet, and mobile applications.

We can see easily when jobs disappear, but creating jobs takes work: it means recognizing, exploring, and adapting to needs and opportunities. As I discussed in my last column,[2] every new product or service (i.e., solution) not only satisfies a need, but also creates new needs in three ways:

1. The new solutions themselves can be improved upon (e.g., shoes can be made more comfortable; laptops and smart phones can be made smaller, lighter, and more powerful; software can be made faster, easier to use, and more reliable)

2. The providers of those new solutions have needs (e.g., sales, marketing, accounting, software, equipment, customer and competitive intelligence, food and cleaning services)

3. New solutions create new needs around them (e.g., mobile phones need holsters; cars need navigation, keyless entry, and camera systems; video games need virtual money; electric vehicles need re-charging stations).

In his modern classic, The Origin of Wealth,[3] Eric Beinhocker estimated the number of individually coded products[4] available to New York City residents in 2006 to be on the order of tens of billions. With this mushrooming range of products and ever-faster pace of innovation, needs and opportunities are coming at us faster than we can recognize or adapt to them.

To become or stay employed in this environment, we’ll first see how to land an existing job (one someone else has created); then, how to create your own job.

Landing an Existing Job
Rather than recount job search techniques here – leverage LinkedIn, research companies you are interested in, network, adopt good grooming habits – let’s see how to make innovation work for, rather than against you in landing a job:

1. Master the very technologies that are eating jobs. Someone has to design, implement, test, build, maintain, market, sell, and apply that software and automation. That someone could be you. MIT/Harvard edX, Coursera, Udacity and CodeCademy offer free Massive Open Online Courses (MOOCs) in programming, AI, machine learning, and databases.

If you are new to IT, HTML and Javascript are a good place to start.[5] Knowing these languages will let you create and maintain simple web sites, help you market yourself online and find full-time or part-time work, even if your career plans are outside of software development. CodeCademy has free courses.

To start a career in software development, consider Python. It’s interactive, introduces you to essential programming concepts, and integrates easily with existing web services, enabling you to leverage others’ work. Job opportunities abound.

Next, consider developing a web service to demonstrate your skills or to offer to others. This will require using a server-side language, likely Python, PHP, or Ruby on Rails (or, if you are more ambitious, Java or C++), then deploying it on one of the cloud computing ecosystems such as Amazon Web Services or Google Cloud Services. Yet another path is creating a mobile app for iPhone or Android and connecting it to your own or others’ web services.
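As a concrete first step toward the web-service suggestion above, here is a minimal sketch using only Python’s standard library (no framework). The endpoint name, message, and port choice are arbitrary illustrations, not part of any particular tutorial.

```python
# A tiny JSON web service built on Python's standard library alone.
# It answers GET /hello with a JSON payload and returns 404 otherwise.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hello":
            body = json.dumps({"message": "Hello from your first web service"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Quick self-demo: bind an ephemeral port, serve in a background
    # thread, make one request against ourselves, then shut down.
    server = HTTPServer(("localhost", 0), EchoHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    port = server.server_address[1]
    with urllib.request.urlopen("http://localhost:%d/hello" % port) as resp:
        print(resp.read().decode())
    server.shutdown()
```

A real deployment would sit behind a production server rather than `http.server`, but even this much is enough to put a working demo on a portfolio.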

Veteran software developer Ervan Darnell, who has worked for both Facebook and Google, reiterates that free tools and courses are available for all of the above, and further notes that the software industry weighs talent more heavily than titles or university degrees. That’s good news. Titles and degrees require entrance qualification exams and tens of thousands of dollars for tuition and expenses. In contrast, MOOCs are free and open to everyone. Increasingly, all you need to get a quality education is initiative, self-discipline, and hard work.

2. Become an early adopter of new technologies and apply them in your work. With the accelerating pace of technology, adopting new technology even slightly ahead of the mainstream of your field will give you more and more of an advantage in productivity and competitiveness. If staying one year ahead gave professionals a 10% advantage in 1992, doing so might give them a 20% advantage in 2014.

This principle applies to all trades and professions. Electricians can use new meters and testers that improve their efficiency and accuracy, and learn to install and maintain computer networks in addition to wiring and components. Plumbers can apply technologies such as SeeSnake, a video camera for inspection and diagnosis of clogged pipes. Dentists, hairdressers, and auto repair shops can use free online software to enable their clients to self-schedule for appointments. Taxi drivers can use GPS to efficiently combine deliveries with passenger service. Real estate agents can use Google Maps to customize displays of listings for clients. These innovations free up time to make trades people and professionals more productive, allow them to offer higher-quality or differentiated services, or both.

ACA (“Obamacare”) incentivizes employers to convert many jobs from full-time to part-time. Fortunately, new online services empower even those without technology skills to find part-time work, for example, as drivers for Lyft, or running errands with TaskRabbit. Going still further, Amazon Mechanical Turk is enabling those in the world’s poorest developing countries to earn income by performing simple tasks (like responding to surveys or tagging everyday items in photos) from wherever they are and whenever they are able.

3. Choose a career in strong demand. Liberal arts are vitally important, but if you are in college, landing a job after graduation is almost certainly urgent. You have a lifetime to learn about the arts and humanities, but only two to four years to prepare to support yourself. Besides IT and automation, fields generally in demand include biotech, nursing, network security, welding, medical technology, and analytics. Find out which are both in greatest demand and of most interest to you. Far more people are studying the arts and humanities than will find jobs in those fields. If you choose arts or the humanities and find yourself underemployed or unemployed, see 1 or 2 above, and “Create Your Own Job” below.

21-year-old Daniel Trujillo, a student at NCP College of Nursing in Hayward, CA, is learning how Google Glass can provide real-time, mobile, hands-free patient charts and histories bedside. He will be among the first generation of hospital practitioners using wearable IT. By learning leading-edge technology in a highly demanded field, I predict he will easily find a job.

Create Your Own Job
Muhammad Yunus, Nobel Peace Prize winner, microfinance pioneer, and founder of the Grameen Bank, says:

All human beings are entrepreneurs. When we were in the caves, we were all self-employed…finding our food, feeding ourselves. That’s where human history began. As civilization came, we suppressed it. We became “labor” because they stamped us, “You are labor.” We forgot that we are entrepreneurs.[6]

Anyone who wants to can create his or her own job. Our ancestors – hunters, gatherers, farmers, craftspeople, and traders – knew no other options. If we were all entrepreneurial once, we can still invoke that inner strength today.

Creating your own job lets you do what you are passionate about; lets you make a long-term investment in you, your own business and brand rather than someone else’s; and lets you address opportunities that are unique to you—no one else has your unique combination of skills, knowledge, relationships, and strengths. So why don’t more people create their own jobs today? It is not that they can’t. In some cases, other paths are easier or have shorter-term pay-offs, such as landing an existing job or going on unemployment. In other cases, regulation raises major hurdles to addressing opportunities, as I discussed in a previous column.[7] I don’t promise that creating your own job will be easy. I do promise that it will expand the boundaries of your world, and possibly profoundly enrich your life.

Here is one approach to creating your own job. Choose any product or service in an area you are passionate and knowledgeable about. The area may be aerospace, boats, cars, cooking, education, electronics, fashion, fiction, films, fitness, gadgets, gardening, health, history, math, merchandising, music, politics, scuba, space, sports, statistics, travel, woodworking, you name it. Now think of limitations of the product or service you selected. For example:

· My running shoes don’t tell me how far or fast I have run, nor details of my stride or gait.

· None of the pharmacies in my neighborhood make home deliveries.

· Arthritis can prevent elderly people from using an iPhone or iPad.

· Airline ground crews lack real-time information during boarding about how many and which overhead bins have open space, sometimes requiring that bags be checked when they could be carried aboard and stowed.

If you are passionate about the product or service, you’ll recognize its limitations before others do. Limitations are simply potential needs. If those needs are shared by many others and don’t already have solutions – both of these require research to validate – bingo! – you have identified an unsatisfied customer need. That’s the first step towards creating a job for you.

Next, brainstorm possible solutions, ideally with your potential customers, that you could provide in whole or in part using the resources at your disposal. Acquiring knowledge of new technologies in the field will expand your possibilities. With whom could you team up or partner, if necessary, to enable the solution? Answering those questions is the second step towards creating your job.

Next, can you get a customer to pay you for your solution, even if rudimentary, incomplete, or unpolished, possibly on the understanding that their early payment will enable you to develop and deliver the full product or service to them? That’s the third step. If so – you have created a job! Assume that you won’t get paid for some or much of the time and effort you invest to win this first customer. After you have successfully delivered what you promised and created your first satisfied customer, find other customers you could similarly serve, refine your solution based on what you have learned, and repeat.

My video Unleash Your Inner Company has many more suggestions for creating your own job and starting your own business. Now imagine tens of millions of people throughout the U.S. and the world similarly searching for unsatisfied needs in areas they are passionate about, assessing which needs they are best suited to satisfy in whole or in part, and designing and building products or offering and delivering services that satisfy those needs. Suddenly, tens of millions of jobs are being created. Many of these efforts will take a second, third, or fourth attempt before they are successful. Every attempt increases your likelihood of success; perseverance is a necessary part of success. A small percentage of these businesses will create not just one but many jobs. This bottom-up approach to satisfying needs and creating jobs is scalable, sustainable, and has hugely raised living standards and quality of life over the decades.

So software and robots are eating jobs? Not yours.

[1] Oxford Martin School, Programme on the Impacts of Future Technology.

[2] “As Entrepreneurs Keep Reminding Us, They Lied To Us In Econ. 101,” September 10, 2013.

[3] The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics, Eric D. Beinhocker, Harvard Business School Press (2006). This magnificent work marries economics and complexity science and imparts deep understanding of the current state and future of economics. I think of it as a modern-day Wealth of Nations. It deserves a wide audience and a prominent place in any economics library.

[4]Stock keeping units (SKUs).

The author is a Forbes contributor. The opinions expressed are those of the writer.


First computer made of carbon nanotubes is unveiled

By James Morgan Science reporter, BBC News


Max Shulaker with Cedric, the first carbon computer: “There is no limit to the tasks it can perform”.

The first computer built entirely with carbon nanotubes has been unveiled, opening the door to a new generation of digital devices.

“Cedric” is only a basic prototype but could be developed into a machine which is smaller, faster and more efficient than today’s silicon models.

Nanotubes have long been touted as the heir to silicon’s throne, but building a working computer has proven awkward.

The breakthrough by Stanford University engineers is published in Nature .

Cedric is the most complex carbon-based electronic system yet realised.

So is it fast? Not at all. It might have been in 1955.


Cedric’s vital statistics 

  • 1 bit processor
  • Speed: 1 kHz
  • 178 transistors
  • 10-200 nanotubes per transistor
  • 2 billion carbon atoms
  • Turing complete
  • Multitasking

The computer operates on just one bit of information, and can only count to 32.

“In human terms, Cedric can count on his hands and sort the alphabet. But he is, in the full sense of the word, a computer,” says co-author Max Shulaker.

“There is no limit to the tasks it can perform, given enough memory”.

In computing parlance, Cedric is “Turing complete”. In principle, it could be used to solve any computational problem.

It runs a basic operating system which allows it to swap back and forth between two tasks – for instance, counting and sorting numbers.

And unlike previous carbon-based computers, Cedric gets the answer right every time.
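The task-swapping behaviour described above can be sketched in software. This toy round-robin scheduler interleaves a counting task and a sorting task, one step at a time; the task details are invented for illustration and have nothing to do with how Cedric is actually wired.

```python
# A toy round-robin scheduler interleaving two tasks, echoing the
# counting-and-sorting demo described in the article. Each generator
# does one small step of work per scheduler turn.
def counting(limit):
    n = 0
    while n < limit:
        n += 1
        yield ("count", n)

def sorting(items):
    # One bubble-sort pass per scheduler turn.
    items = list(items)
    for end in range(len(items) - 1, 0, -1):
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
        yield ("sort", list(items))

def round_robin(tasks):
    """Swap back and forth between tasks until all are exhausted."""
    log = []
    while tasks:
        task = tasks.pop(0)
        try:
            log.append(next(task))
            tasks.append(task)      # task not finished: requeue it
        except StopIteration:
            pass                    # task finished: drop it
    return log

log = round_robin([counting(3), sorting([3, 1, 2])])
print(log)
```

The resulting log alternates count and sort steps, which is all “multitasking” means on a machine this simple: the operating system swaps which task gets the next turn.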

Imperfection-immune

“People have been talking about a new era of carbon nanotube electronics, but there have been few demonstrations. Here is the proof,” said Prof Subhasish Mitra, lead author on the study.

The Stanford team hopes their achievement will galvanise efforts to find a commercial successor to silicon chips, which could soon encounter their physical limits.

Hand holding carbon nanotube wafer

The transistors in Cedric are built with an “imperfection-immune” design

Carbon nanotubes (CNTs) are hollow cylinders composed of a single sheet of carbon atoms.

They have exceptional properties which make them ideal as a semiconductor material for building transistors, the on-off switches at the heart of electronics.

For starters, CNTs are so thin – thousands could fit side-by-side in a human hair – that it takes very little energy to switch them off.

“Think of it as stepping on a garden hose. The thinner the pipe, the easier it is to shut off the flow,” said HS Philip Wong, co-author on the study.

But while single-nanotube transistors have been around for 15 years, no-one had ever put the jigsaw pieces together to make a useful computing device.

So how did the Stanford team succeed where others failed? By overcoming two common bugbears which have bedevilled carbon computing.

First, CNTs do not grow in neat, parallel lines. “When you try and line them up on a wafer, you get a bowl of noodles,” says Mitra.

The Stanford team built chips with CNTs which are 99.5% aligned – and designed a clever algorithm to bypass the remaining 0.5% which are askew.

They also eliminated a second type of imperfection – “metallic” CNTs – a small fraction of which always conduct electricity, instead of acting like semiconductors that can be switched off.

To expunge these rogue elements, the team switched off all the “good” CNTs, then pumped the remaining “bad” ones full of electricity – until they vaporised. The result is a functioning circuit.

The Stanford team call their two-pronged technique “imperfection-immune design”. Its greatest trick? You don’t even have to know where the imperfections lie – you just “zap” the whole thing.

Carbon nanotube wafer next to computer using this technology

Cedric can only count to 32, but in principle it could count to 32 billion

“These are initial necessary steps in taking carbon nanotubes from the chemistry lab to a real environment,” said Supratik Guha, director of physical sciences for IBM’s Thomas J Watson Research Center.

But hang on – what if, say, Intel, or another chip company, called up and said “I want a billion of these”. Could Cedric be scaled up and factory-produced?

In principle, yes: “There is no roadblock”, says Franz Kreupl, of the Technical University of Munich in Germany.

“If research efforts are focused towards a scaled-up (64-bit) and scaled-down (20-nanometre transistor) version of this computer, we might soon be able to type on one.”

Shrinking the transistors is the next challenge for the Stanford team. At a width of eight microns they are fatter than today’s most advanced silicon chips.

But while it may take a few years to achieve this gold standard, it is now only a matter of time – there is no technological barrier, said Max Shulaker.

“In terms of size, IBM has already demonstrated a nine-nanometre CNT transistor.

“And as for manufacturing, our design is compatible with current industry processes. We used the same tools as Intel, Samsung or whoever.

“So the billions of dollars invested into silicon have not been wasted, and can be applied to CNTs.”

For 40 years we have been predicting the end of silicon. Perhaps that end is now in sight.


Sign of the Times? Peak Oil Website Shuts Down

Anthony Wile, Chief Editor for The Daily Bell writes:

For years, we’ve been pointing out that Peak Oil is a dominant social theme, a scarcity meme used by the powers-that-be to reinforce the US petrodollar and generally to control economic and sociopolitical elements of society.

And now comes word via various news reports including a story at MarketWatch that a main Internet proponent of the Peak Oil myth – The Oil Drum – is shutting its doors.

Here’s how MarketWatch describes it:

… A website created and frequented by advocates of “peak oil,” is closing its doors July 31 after an eight-year run. The site will be kept as a repository of old articles, but will no longer offer new ones, according to a post on the site dated July 3.

The decision was reached thanks to “scarcity of new content caused by a dwindling number of contributors” and the cost of running the site, the post said. The post garnered more than 700 comments from readers mourning the site’s virtual death. Commenters suggested “donate” buttons and other ideas to raise money.

With news of record-breaking North American oil and gas production seemingly every day, maybe it just got too hard to maintain a site devoted to the notion that the world’s oil production was at or near a peak … Detractors gleefully pointed out that the theory did not take into consideration technological advances, while defenders retort that, inevitably, demand for oil will outstrip supply, leading to higher oil prices and shortages.

Count us among the detractors.

For close to a decade, we’ve been identifying and exposing pernicious scarcity memes that seek to scare middle classes into giving up power and wealth to globalist enterprises such as the United Nations, International Monetary Fund, World Bank, etc. Peak Oil was among the higher profile of these memes probably because it was among the most useful.

When top men in the US government and at the Federal Reserve wanted to go off a gold standard, they used Peak Oil concepts to justify the petrodollar. People were led to believe that the only large-scale oil reserves were in the Middle East and then various green manipulations made it difficult to gain oil throughout the West. The Saudis agreed to demand dollars for oil and the rest was history. The world had to hold dollars and dollars became a “reserve currency.”

But now the game is changing. Shale oil and gas – extracted via fracking – have suddenly made it clear that far from suffering from oil scarcity, the world is awash in energy.

Here’s an excerpt from another recent MarketWatch article:

World to use less OPEC oil as U.S., Canada lead oil supply growth … The world will consume less oil from the Organization of the Petroleum Exporting Countries next year, even as the cartel increased its 2014 oil demand growth forecast to its highest since 2010.

Demand for OPEC crude next year is expected to decline by 300,000 barrels a day to an average of 29.6 million barrels a day, OPEC said Wednesday in its monthly report. The report is the first this year to make predictions for 2014. This year’s demand for OPEC oil was forecast as 29.9 million barrels a day, almost unchanged from the previous report, and a decline of 400,000 from 2012.

Supplies from non-OPEC nations are expected to grow by 1.1 million barrels a day in 2014, with the U.S. and Canada leading that growth, followed by Latin America and countries in the former Soviet Union.

Did you ever think, dear reader, that you would someday read “US and Canada lead oil supply growth”? Perhaps you felt along with millions – billions – of others that oil and gas were scarce commodities, mostly located in the Middle East.

What is noteworthy about fracking is that the technology has been around for decades. In fact, there’s nothing much that seems new about the process except its application. And that has led us to wonder why the technology has been rolled out now by Big Oil. Our conclusion, which we’ve offered in several articles, is that the petrodollar itself is being purposefully destabilized.

In order to globalize currency, one needs to diminish the dollar. Is it just a coincidence that the BRIC countries and their currencies have grown stronger as the dollar and the euro have weakened? Or that gold has taken a tumble?

Perhaps so, or perhaps various currencies are indeed being positioned to create a more globalized currency basket. This has been a globalist dream since John Maynard Keynes proposed a world currency some 70 years ago.

It is easy to ignore such ideas because they imply that there is enormous monetary manipulation and coordination at the very top of Western societies. But the Anglo-American axis still seems firmly in the saddle to me, and scarcity memes seem to be a main tool of various manipulations just as they have been in the past.

What is fascinating to me is the way that the mainstream news media has quietly shifted from covering an era of energy scarcity to reporting on an era of energy plenty. For half a century we’ve been exposed to vehement promotions regarding rapidly diminishing oil and gas supplies. And now that things are different, the mainstream media reports on these fundamental changes as if they were neither noteworthy nor peculiar.

The expansion of oil and gas supplies is a huge story. Assuming it continues to be for real (and does not prove out as yet another peculiar globalist gambit or get sidetracked by environmental concerns) it is going to have a profound impact on societies around the world and even on the money they use.

Like everything else that is critically important, these changes are taking place without the scale of coverage you’d think they warranted.

There’s a lot more to this story than is currently being reported. But we’ll stay on it. The investment implications are vast.


7 Technologies that Could Revolutionize Education

With all the technology being developed these days, it’s easy to imagine how it could transition into classrooms. But no, most classrooms are still the same as when we were kids, and even when our parents were in grade school.

Classrooms are still equipped with one or two blackboards or whiteboards, chairs and tables for students, some art on the wall, books on shelves, and the other whatnots found in common classrooms. Could this lack of technological improvement contribute to how poorly students are faring in today’s economy?

During The Next Web Conference Europe, Pearson Chief Digital Officer Juan Lopez-Valcarcel noted that 46 percent of US college students do not graduate, and of those who do, 40 percent are told they lack the right skills for the jobs they apply for. This may be because, while tuition has increased by 70 percent in the last 10 years, the value of a degree has declined by 15 percent over the same time frame. But the biggest problem is that universities and colleges are not offering courses relevant to the in-demand jobs of today and tomorrow. For instance, if someone wants to be a data scientist, they won’t find a school offering a course specific to that job.

So how can things change?  How can technology help transform the state and quality of education?

7 technologies that could revolutionize education

  • Invisible Computers

Though some schools are equipping students with tablets to facilitate learning, the device itself can still pose a barrier, since it can be a distraction for some students. And more problems arise when a tablet is left at home, misplaced, or broken. What happens to students in those scenarios? Lopez-Valcarcel proposes a future of invisible computers, where classroom objects, like a desk, serve as computers and all the data is available whenever needed. Teachers can give assignments without worrying that a student won't be informed about them.

  • Body Language Assessment

Today, some schools use analytics to predict how well a student will do on an exam based on previous tests or class participation. With that information, the teacher can do something about it, like offer to tutor the student before the exam, so the student's chance of failing is reduced.

In the future, Lopez-Valcarcel proposes, instructors could use body language assessment to determine whether a student is able to follow the discussion. If a student is writing slowly, keeps wrinkling his forehead, or stares blankly at the board, these may be signs that he is struggling to understand, signs a teacher needs to be aware of. Once aware, the teacher can stop and explain things further before moving on to a new topic or ending the discussion.

  • Robot-Assisted Learning

Though some students curse their teachers, they still need help sometimes. Lopez-Valcarcel talked about an MIT project that equipped Ethiopian children with tablets and no instructions, and the children were able to learn on their own, but not everything will come that easily. He gave the example of Korea, where, given the scarcity of English teachers there, classrooms were equipped with robots that facilitate assisted learning of the language instead of hiring people to teach kindergarten students English. Why robots? Because kids respond better to robots than to tablets.

  • Global Rockstar Teachers

Massive Open Online Courses seem to be a hot commodity these days. They allow you to take courses, and even earn a degree, online. Some courses just offer reading lists that you need to study before taking an exam, while others have videos that students can download. What the future holds is an array of global rockstar teachers: teachers who impart their knowledge online, either via live streaming or one-on-one sessions. This means the best teachers will be available to anyone in the world, no longer confined to the top schools that are insanely hard to get into.

  • Stealth Learning

Educational games are boring, and they may soon go the way of the dinosaur. So how can you make learning more fun, or use it as a tool for training and hiring? Make popular games more educational; players won't even have to know about it. Wouldn't it be more fun to go to school and be asked by your teacher to play your favorite game for the whole period? At first you'd probably think there's a catch, but haven't you noticed how effortlessly you retain information about the things you like? A perfect example: my little brother passed a test on Dante's Inferno by playing the game on PS3.

  • Social Learning

Almost everyone is on some kind of social platform: Facebook, Twitter, Flickr, Google+, and numerous others. Teens use these to communicate about anything happening in various aspects of their lives. Some use social networks to ask their classmates if there are any assignments, or to raise questions about a project. If educators took social media a step further and used it to deliver educational material to students, it could help students engage and learn more.

  • Open Hardware

There are many devices available to consumers these days, and some are currently available only to developers. What if Google Glass were made available to students? Could they make something better for it, or with it? If students were given access to developer-aimed devices, could they be the next big names in Silicon Valley?

Will technology teach or distract in the classroom?

But could these technologies really help education, or will they just cause more problems for students? Joining Kristin Feledy on this morning's NewsDesk is NewsDesk Director Winston Edmondson, who gives his Breaking Analysis on whether technology is really something schools need, or whether all these issues have a deeper root.

“The biggest barrier is actually the culture within a school district. Before I talk about invisible education, let's talk about invisible administration. You can look at examples like the Lewisville Independent School District, where they brought in Dr. Stephen Waddell, who was very open-minded about technology. He actually instituted changes that embrace all these technology trends, so you're seeing education actually changing before your eyes. So that's probably the biggest barrier: you need to have an administration that's willing to have some of these exciting, yet disruptive, technologies,” Edmondson stated.

For more of Edmondson’s Breaking Analysis, check out the NewsDesk video below:

Technology Trends that are Changing the Face of Education – Breaking Analysis