News and Research

Featured video: MIT Hyperloop

A team of MIT students, including LGOs, is competing in the SpaceX Hyperloop Pod Competition in California.
Read more

LGO

MIT Students Tour Pratt & Whitney’s Columbus Facility

A group of more than 50 students and faculty members from MIT’s Leaders for Global Operations program toured the Columbus Engine Center on January 9 to experience what it’s like to work in a high-tech manufacturing business.

January 11, 2017 | More

Researchers design one of the strongest, lightest materials known

A team of researchers at MIT, including Markus Buehler, the head of MIT’s Department of Civil and Environmental Engineering (CEE) and LGO thesis advisor, has designed one of the strongest lightweight materials known, by compressing and fusing flakes of graphene, a two-dimensional form of carbon. The new material, a sponge-like configuration with just 5 percent the density of steel, can have a strength 10 times that of steel.

January 6, 2017 | More

From Teen Mom To 3 MIT Degrees

This Latina shares her secrets to making the most of your career with Women@Forbes. Noramay Cadena (LGO ’11) is a #bosslady in all aspects of her life.

November 17, 2016 | More

MIT program enrolls first student from the Philippines

The MIT Sloan School of Management recently announced that the first student from the Philippines, Dominique Rustia, matriculated into its prestigious Leaders for Global Operations (LGO) Program. In the MIT LGO program, Rustia will earn both an MBA from MIT Sloan and an SM from MIT’s School of Engineering.

November 5, 2016 | More

Video: Electric motors find new roles in robots, ships, cars, and microgrids

James Kirtley, LGO advisor and professor in MIT’s Department of Electrical Engineering and Computer Science and in MIT’s Research Laboratory of Electronics, describes the technology’s expanding reach: “Electric motors are being used more widely in ships, airplanes, trains, and cars. We’re also seeing a lot more electric motors in robots.”

October 25, 2016 | More

How to teach the unteachable

In August, LGO advisor Leon Glicksman, an MIT professor of architecture and mechanical engineering, and John Lienhard, a professor of mechanical engineering, published “Modeling and Approximation in Heat Transfer” (Cambridge University Press).

The product of a nearly 20-year-long collaboration between them, the book explores the challenges faced by engineers in systems design and research. Mastery of fancy calculations is well and good, they argue, but students must also acquire a critical and often neglected skill set: the ability to think in physical terms. To this end, the authors focus on how modeling and synthesis can be carried out in practice. This is about thinking about the big picture: how to get started, how to identify the key physical variables in a problem, and how to focus attention on what matters.

The School of Engineering recently spoke with the coauthors (who replied collectively) by email about their new text.

Q: How would you describe the origins of this textbook?

A: Many excellent textbooks on thermal science are already available, but most of them lack systematic discussions of how modeling and synthesis can be carried out in practice. Specifically, most textbook problems have already been…

October 21, 2016 | More

Translating a Biologic Revolution into an Organizational Overhaul

MIT LGO students and professors work with Mass General Hospital to redesign healthcare processes and bring novel therapies to patients.

October 20, 2016 | More

MIT Alumna is Industry-Tested, Tesla-Approved

Unlike most automotive manufacturers, Tesla has no status quo. That’s good news for Grace Overlander (LGO ’08) who believes that we can always make things better.

October 17, 2016 | More

How the Chemical Industry Joined the Fight Against Climate Change

Ken Gayer (LGO ’98), VP and General Manager of Honeywell Fluorine Products, is quoted in this NY Times article.

October 16, 2016 | More

Electron-phonon interactions affect heat dissipation in computer chips

LGO professor Gang Chen and his research group say cellphones, laptops, and other electronic devices may face a higher risk of overheating, as a result of interactions between electrons and heat-carrying particles called phonons.

The researchers have found that these previously underestimated interactions can play a significant role in preventing heat dissipation in microelectronic devices. Their results are published today in the journal Nature Communications.

In their experiments, the team used precisely timed laser pulses to measure the interactions between electrons and phonons in a very thin silicon wafer. As the concentration of electrons in the silicon increased, the electrons scattered more phonons, preventing them from carrying heat away.

“When your computer is running, it generates heat, and you want this heat to dissipate, to be carried out by phonons,” says lead author Bolin Liao, a former graduate student in mechanical engineering at MIT. “If phonons are scattered by electrons, they’re not as good as we thought they were in carrying heat out. This will create a problem that we have to solve as chips become smaller.”

On the other hand, Liao says this same effect may benefit thermoelectric generators, which convert heat directly into electrical energy. In such devices, scattering phonons, and thereby reducing heat leakage, would significantly improve their performance.

“Now we know this effect can be significant when the concentration of electrons is high,” Liao says. “We now have to think of how to engineer the electron-phonon interaction in more sophisticated ways to benefit both thermoelectric and microelectronic devices.”

Liao’s co-authors include Gang Chen, the Carl Richard Soderberg Professor in Power Engineering and the head of the Department of Mechanical Engineering; Alexei Maznev, a senior research scientist in the Department of Chemistry; and Keith Nelson, the Haslam and Dewey Professor of Chemistry.

Blocking flow

In transistors made from semiconductor materials such as silicon, and electrical cables made from metals, electrons are the main agents responsible for conducting electricity through a material. A main reason why such materials have a finite electrical resistance is the existence of certain roadblocks to electrons’ flow — namely, interactions with the heat-carrying phonons, which can collide with electrons, throwing them off their electricity-conducting paths.

Scientists have long studied the effect of such electron-phonon interactions on electrons themselves, but how these same interactions affect phonons — and a material’s ability to conduct heat — is less well-understood.

“People hardly studied the effect on phonons because they used to think this effect was not important,” Liao says. “But as we know from Newton’s third law, every action has a reaction. We just didn’t know under what circumstances this effect can become significant.”

Scatter and decay

Liao and his colleagues had previously calculated that in silicon, the most commonly used semiconductor material, when the concentration of electrons rises above 10^19 per cubic centimeter, the interactions between electrons and phonons would strongly scatter phonons, and would reduce the material’s ability to dissipate heat by as much as 50 percent when the concentration reaches 10^21 per cubic centimeter.

“That’s a really significant effect, but people were skeptical,” Liao says. That’s mainly because in previous experiments on materials with high electron concentrations, researchers assumed the reduction in heat dissipation was due not to electron-phonon interactions but to defects in the materials. Such defects arise from the process of “doping,” in which additional elements such as phosphorus and boron are added to silicon to increase its electron concentration.

“So the challenge to verify our idea was, we had to separate the contributions from electrons and defects by somehow controlling the electron concentration inside the material, without introducing any defects,” Liao says.

The team developed a technique called three-pulse photoacoustic spectroscopy to precisely increase the number of electrons in a thin wafer of silicon by optical methods, and measure any effect on the material’s phonons. The technique expands on a conventional two-pulse photoacoustic spectroscopy technique, in which scientists shine two precisely tuned and timed lasers on a material. The first laser generates a phonon pulse in the material, while the second measures the activity of the phonon pulse as it scatters, or decays.

Liao added a third laser, which, when shone on the silicon, precisely increased the material’s concentration of electrons without creating defects. When he measured the phonon pulse after introducing the third laser, he found that it decayed much faster, indicating that the increased concentration of electrons acted to scatter phonons and dampen their activity.
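The core of the measurement is comparing how quickly the phonon pulse decays with and without the electron-injecting laser. A minimal sketch of that comparison, assuming a simple exponential decay model S(t) ~ exp(-t/tau) and synthetic traces (the actual analysis in the paper is far more involved):

```python
import numpy as np

def phonon_lifetime(t, signal):
    """Estimate the decay time tau from S(t) ~ exp(-t/tau) via a
    log-linear least-squares fit (illustrative only)."""
    slope, _ = np.polyfit(t, np.log(signal), 1)  # log S = -t/tau + const
    return -1.0 / slope

# Hypothetical decay traces, arbitrary units and made-up lifetimes:
# the trace measured with the third laser on decays faster (shorter tau),
# the signature of electrons scattering the phonon pulse.
t = np.linspace(0, 10e-9, 50)        # 10-nanosecond measurement window
laser_off = np.exp(-t / 4e-9)        # assumed tau = 4 ns
laser_on = np.exp(-t / 2e-9)         # assumed tau = 2 ns

assert phonon_lifetime(t, laser_on) < phonon_lifetime(t, laser_off)
```

A shorter fitted lifetime with the third laser on is what would indicate, as in the experiment, that the added electrons are scattering phonons.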

“Very happily, we found the experimental result agrees very well with our previous calculation, and we can now say this effect can be truly significant and we proved it in experiments,” Liao says. “This is among the first experiments to directly probe electron-phonon interactions’ effect on phonons.”

Interestingly, the researchers first started seeing this effect in silicon that was loaded with 10^19 electrons per cubic centimeter — comparable to, or even lower than, the concentrations in some current transistors.

“From our study, we show that this is going to be a really serious problem when the scale of circuits becomes smaller,” Liao says. “Even now, with transistor size being a few nanometers, I think this effect will start to appear, and we really need to seriously consider this effect and think of how to use or avoid it in real devices.”

This research was supported by S3TEC, an Energy Frontier Research Center funded by the U.S. Department of Energy’s Office of Basic Energy Sciences.


October 12, 2016 | More

Sloan

A live conversation with Chris Knittel: Uber and racial discrimination

Join us on February 15th, 12 noon to 12:30 ET for a live conversation with Chris Knittel, professor of applied economics at MIT Sloan, who will talk about his latest research on racial bias in the sharing economy—how Uber and Lyft are failing black passengers and what to do about it.

Eva Millona, the Executive Director of the Massachusetts Immigrant and Refugee Advocacy Coalition (MIRA), will also appear on the program to discuss ways Uber and Lyft can work on mitigating discrimination.

You will be able to view the live show by bookmarking this site and tuning in February 15th at 12 noon ET.

Submit your questions to #MITExpert on Twitter before 11 am ET on Feb. 15th. Your question could be answered live on the air.

Christopher Knittel is the George P. Shultz Professor and a Professor of Applied Economics at the MIT Sloan School of Management.

January 23, 2017 | More

Legal Challenge To Clean Power Plan Will Have Global Ramifications

The D.C. Court of Appeals began hearing arguments recently in a historic session taken “en banc” – with a roster of 10 judges hearing a case that challenges President Obama’s Clean Power Plan. Specifically, some industry associations are challenging new targets for coal plants that would require a 32 percent reduction in carbon emissions by 2030. More generally, the case highlights challenges to Obama’s use of his executive powers to regulate the electricity industry in a way that will help the US meet international targets for reducing carbon emissions.

Obama’s use of the Clean Air Act to bring his Clean Power Plan to fruition has been called “vast legal overreach” by some law professors, who have said it is tantamount to burning the Constitution.

While it may take weeks or even months for the court to rule, the case highlights the extreme importance of keeping U.S. plans to reduce emissions on track in order to spur continued global cooperation on global warming. With the U.S. election continuing to create its own heat, there are many interlocking and swiftly moving pieces on the global climate change front.

The world needs continued leadership from the U.S. and a ruling by the court that Obama had overreached would impact progress globally.

The hearing comes on the heels of last November’s Conference of the Parties on Climate Change in Paris (COP-21), where more than two decades of difficult and very tenuous multilateral negotiations on collective action on climate change concluded in a delicate compromise. The effectiveness of this agreement hinges on nations adhering to – and eventually ratcheting up – their nationally determined contributions (NDCs). Other big emitters are living up to their commitments. India, for example, is set to ratify its COP-21 contributions next month on the historic occasion of Gandhi’s birthday, October 2; just last month, China’s President Xi Jinping announced the ratification of China’s COP-21 commitments in a joint press conference with President Obama. China and India rank first and third, respectively, in global greenhouse gas emissions. The U.S. ranks second. Combined, these three countries account for nearly one-half of all emissions.

Read the full post at The Huffington Post

Christopher Knittel is the George P. Shultz Professor and a Professor of Applied Economics at the MIT Sloan School of Management.

January 18, 2017 | More

Why entrepreneurs in the developing world need new funding models

Increasingly, it is innovation-driven entrepreneurs who are providing effective and scalable solutions rather than aid agencies or governments.

Traditionally, the focus of entrepreneurship in the developing world has been on creating small- and medium-sized enterprises serving local markets. However, that emphasis must shift from small firms to what MIT calls innovation-driven enterprises: start-ups that can scale for significant impact.

Building an innovation-driven enterprise is full of challenges for any entrepreneurial team. They must find an appropriate beachhead market, prototype and pilot, and recruit and retain top talent. They also require specialised entrepreneurial finance at each stage.

For development entrepreneurs, access to appropriate types of capital is a significant constraint.

Their challenges are not just about the limited availability of institutionalised venture capital, but about access to the full range of “risk capital” options, from initial financing by friends and family and angel investors to VCs, private equity and commercial banking. The creation of a pipeline of financial instruments is a critical bottleneck.

Building a spectrum of financial instruments to support entrepreneurs in emerging economies requires engagement from all key stakeholders: government policymakers, aid agencies, investors, philanthropists and universities.

Rather than simply replicating instruments that support entrepreneurs in economies with robust institutions, financing entrepreneurship for developing world impact requires listening to entrepreneurs, understanding their needs, and designing accordingly.

Read the full post at City A.M.

Fiona Murray is the William Porter (1967) Distinguished Professor of Entrepreneurship, the Associate Dean for Innovation, Co-Director of the Innovation Initiative, Faculty Director of the Legatum Center, and recently appointed as a Member of the UK Prime Minister’s Council for Science and Technology (CST).

December 12, 2016 | More

Study: Mobile-money services lift Kenyans out of poverty

Since 2008, MIT economist Tavneet Suri has studied the financial and social impacts of Kenyan mobile-money services, which allow users to store and exchange monetary values via mobile phone. Her work has shown that these services have helped Kenyans save more money and weather financial storms, among other benefits.

Now, Suri is co-author of a new paper showing that mobile-money services have had notable long-term effects on poverty reduction in Kenya — especially among female-headed households — and have inspired a surprising occupation shift among women.

Published in today’s issue of Science, the study estimates that, since 2008, access to mobile-money services has raised the daily per capita consumption of 194,000 Kenyan households — or 2 percent — enough to lift them out of extreme poverty (living on less than $1.25 per day).

But there’s an interesting gender effect: Female-headed households saw far greater increases in consumption than male-headed households. Moreover, mobile-money services have helped an estimated 185,000 women move from farming to business occupations.

“Previously, we’ve shown mobile money helps you with financial resilience. But no one has understood, if you improve resilience, what happens over the longer term. This is the first study that looks at long-term poverty reduction and at gender,” says Suri, an associate professor at the MIT Sloan School of Management, who co-authored the paper with longtime collaborator William Jack, an economist at Georgetown University.

By 2015, more than 270 mobile-money services were operating in 93 countries, with an estimated 411 million accounts. The Kenyan study is important, Suri says, because it shows that mobile-money services aren’t just conveniences but do, in fact, have a positive impact on people’s livelihoods. “[That] can be useful for regulators trying to figure out if they want to allow it in their country, or whether someone wants to start a service in their country as an entrepreneur,” Suri says.

Measuring “agent density”

The study looks at M-PESA, the country’s most popular service, which launched in 2007 and has more than 25 million Kenyan users. There are more than 120,000 M-PESA agents scattered around the country, who handle deposits and withdrawals.

In 2010, Suri and Jack co-authored a study that showed M-PESA helped users borrow, save, and pay for services more easily. A 2012 study by the pair showed M-PESA helped Kenyans manage financial uncertainties caused by crop failures, droughts, or health issues. The idea is that M-PESA users can draw on a wider network of support, and receive payments more quickly, during dire financial times.

This new paper is “the grand finale” of the researchers’ long-term examination of the impact of M-PESA in Kenya, Suri says. For this study, the researchers compiled surveys of 1,600 households across Kenya over the years, looking at, among other things, average daily per capita consumption — meaning total money spent by the individual and household — and occupational choices.

Instead of looking at the number of individuals using M-PESA, the researchers measured the rise in the number of service agents within 1 kilometer around each household — or “agent density” — during early rollout of the mobile-money services. They then compared the consumption and occupation, and other outcomes, of households that saw relatively large increases of agent density, with those that saw no increases or much smaller ones, over the years.

Not surprisingly, households where agent density increased by five agents — the average in the sample — also saw a 6 percent increase in per capita consumption, enough to push 64 (or roughly 4 percent) of the sampled households above poverty levels. The World Bank defines spending less than $1.25 per day as “extreme poverty,” and spending less than $2 per day as “general poverty.” Mean daily per capita consumption among the sample was $2.50.
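The headcount comparison described above boils down to counting households that sit below the World Bank’s $1.25/day extreme-poverty line before a consumption increase and above it after. A minimal sketch with synthetic, made-up consumption figures (the study’s actual estimation controls for many more factors):

```python
EXTREME_POVERTY = 1.25  # World Bank extreme-poverty line, USD per person per day

def lifted_out(consumption, pct_increase):
    """Count households below the line before the increase and above it after.

    consumption: daily per capita consumption in USD, one value per household.
    """
    return sum(
        1 for c in consumption
        if c < EXTREME_POVERTY <= c * (1 + pct_increase)
    )

# Hypothetical sample: two households sit just under the line, and the
# 6 percent consumption increase reported in the study pushes them over it.
households = [0.90, 1.20, 1.24, 1.30, 2.50, 3.10]
print(lifted_out(households, 0.06))  # → 2
```

Only households close enough to the line cross it, which is why a modest average consumption gain translates into a small but meaningful share (roughly 4 percent of the sample in the study) escaping extreme poverty.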

The impact was even more pronounced among female-headed households. When agent density rose — from zero to six agents over six years — these households saw a daily per capita consumption increase of about 18.5 percent. This level of agent density growth also reduced extreme poverty among female-headed households by 9.2 percent, and reduced households in general poverty by 8.6 percent.

Another surprising finding, Suri says, was that increases in agent density caused about 3 percent of women in both female- and male-headed households to take up business or retail occupations over farming. These occupations generally entailed single-person businesses based around producing and selling goods, which is made easier by mobile money, Suri says. “You used to grow vegetables, but now you take your vegetables to the market and sell them, or you open a little food cart or kiosk,” she says.

Using extrapolation methods on their data, the researchers estimate that the spread of mobile-money services has helped raise 194,000 Kenyan households out of extreme poverty, and induced 185,000 women to work in business or retail occupations over farming.

“Suri and Jack’s results are provocative and enticing,” says Dean Karlan, a professor of economics at Yale University. “Provocative in that they find long-term impacts on poverty for women from an important, growing, and profitable business innovation, mobile money. Enticing in that they show us a clear base on which further innovation can and should expand, to find even better ways to use mobile money to target specific problems and make important impacts on issues of poverty around the world.”

Savings and independence

Exactly why M-PESA causes increases in per capita consumption and shifts in occupation remains unclear, Suri says. But the researchers have a few ideas, one being that more secure storing of money leads to better financial management and savings, especially among women: The study found that female-headed households that saw greater agent density also saw around a 22 percent rise in savings.

The researchers also think mobile money could give women in male-headed households, who are also usually secondary income earners, more financial independence, which could help them start their own businesses. “As a woman, sometimes you’re not able to save on your own, because cash gets used by the whole house. [Mobile money] allows you to keep separate cash and … manage a source of income on your own,” Suri says.

Moving forward, Suri and Jack now aim to conduct similar research on the impact of mobile-money services on poverty in Uganda, Tanzania, and Pakistan “to find out if this is just an effect for Kenya or more systematic across other countries,” Suri says.

The research was funded, in part, by Financial Sector Deepening Kenya and the Bill and Melinda Gates Foundation.


December 8, 2016 | More

How to win in a winner-take-all world

Analyst R “Ray” Wang’s lessons from the digital revolution. Leading digital companies are taking 70 percent of market share and 77 percent of profits in their industries, R “Ray” Wang told students Dec. 1 at MIT Sloan. Meanwhile, more than half of Fortune 500 companies lost money last year while more than half of the companies on the list since 2000 are gone—merged, acquired, bankrupt, or off the list, Wang said.

In this winner-take-all economy, what makes a winner? Wang has been studying and working with large, global companies and executives for his entire career. As the principal analyst, founder, and chairman of Constellation Research, he’s watched the digital disruption of the last decade and helped major companies navigate digital transformation.

Wang spoke with students as part of MIT Sloan’s Innovative Leadership Series. Here’s some of his advice for leaders looking to take on—and take over—an industry.

Invert your priorities. Focus on differentiation and brand.
Too many companies are focusing too much on regulatory compliance and operational efficiency, keeping the motor running and pursuing incremental growth, Wang said.

Strategic differentiation and brand strategy are not getting the attention they deserve, he argued.

“This is the fastest way to fail. This is why every Fortune 500 [company] has lost in the marketplace,” Wang said. “Because they optimized to this. It should be the other way around. ‘Why do you exist? What is your mission? What is the business model you want to build?’ Then figure out the products, the services, the insights, the experiences, and outcomes you want to support in that business model.”

Compliance and efficiency should be automated and outsourced, he said. “If you do that, you actually have a start at jump-starting growth.”

Data is the foundation of digital business.
“You can’t do digital without data,” Wang said. “We’re going from gut-driven decisions, which you should still have, to data-driven decisions, which tell you if you’re right or wrong.”

Data leads to information that can be shared with customers for a fee, in an effort to provide a better service. Will consumers pay delivery companies for the ability to track a delivery driver on a Waze-like traffic app, in turn allowing them to optimize their own time while waiting for a package? Wang thinks they will.

“It’s data-driven and we’re going to see more and more services like this,” he said. “And it doesn’t cost a lot of money to implement that.”

Goodbye middleman. Hello direct connection.
That car dealer you’re certain ripped you off. You don’t have to deal with him anymore. The car company Tesla won’t haggle even if you want to—CEO Elon Musk has decreed no discounts to anyone, anywhere—but it will deliver your new car to your door. Digital disruption allows more and more large companies to sell directly to consumers, a model Dell pioneered in the 1980s.

“Nobody wants a middleman, unless that middleman is adding value,” Wang said. The same market for direct service is seen in commerce platforms like Etsy and service platforms like TaskRabbit, Wang said. In both cases, digital platforms function as what MIT Sloan professor emeritus Richard Schmalensee calls “matchmakers,” reducing friction and distance between customers and service providers.

Integrate. Partner strategically and carefully.
In the growth and reach of Amazon, Wang sees a case study for winner-take-all success. The company was once only a bookseller, but it used early partnerships to gather competitor data, Wang said. As a cloud computing provider with Amazon Web Services, it has insight into the Internet traffic of competitors, he said.

“They learned from their competitor’s data how to do the job better,” Wang said.

Through a new partnership with the U.S. Postal Service, the company now delivers on Sundays, giving it an edge over FedEx and UPS. The recent purchase of the Washington Post by Amazon CEO Jeff Bezos gives Bezos an opportunity to distribute news through the e-reader Kindle, an Amazon product, Wang said.

“This is a content, network, and technology platform all in one,” he said. “This is why nobody can beat them.”

“You’re going to see one of these emerge [in every industry],” he said. “And the question is ‘How do you build a partnership around content, network, and technology platforms?’ Because you’re not going to have all those pieces. So you’ve got to figure out which partners to assemble to go drive this, in terms of building the next set of vertically-integrated monopolies.”

December 3, 2016 | More

In London, extending MIT’s entrepreneurial ecosystem

Five events build connections between industry, startups, governments, faculty members, and financiers. A rider, or “champion,” for the Nigerian startup MAX, which provides “last mile delivery” through a digital platform. Co-founder Adetayo Bamiduro, MBA ’15, will speak at an MIT conference in London Dec. 13.

In Lagos, Nigeria, a man straddles a motorcycle. He speeds off into the sprawling network of streets in Africa’s largest city to deliver a package.

The biker is a delivery driver for MAX, a fledgling Nigerian startup that is trying to solve the problem of so-called “last mile delivery.” How do you pick up and deliver packages or food in an efficient way in a city where many people don’t have an address?

MAX has built a mobile and web platform for deliveries, not unlike the system used by Uber and other on-demand services. It seeks goal-driven, customer service-oriented drivers, offers no-interest loans so drivers can own their motorcycles, provides a training program, and gives drivers a smartphone.

Then the drivers—30 of them so far, all of whom are “champions” in MAX parlance—head out on deliveries. They geotag the location of customers using the company’s app, creating an address that can be used for future drop-offs.
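The geotag-as-address mechanism can be pictured as a simple lookup: the first delivery records a customer’s coordinates, and later orders reuse them in place of a street address. A minimal sketch, with hypothetical names and structure (not MAX’s actual platform):

```python
address_book = {}  # customer id -> (latitude, longitude)

def drop_off_location(customer, geotag=None):
    """Return delivery coordinates, registering a geotag on first delivery."""
    if customer not in address_book:
        if geotag is None:
            raise ValueError("first delivery for this customer needs a geotag")
        address_book[customer] = geotag  # the geotag becomes the address
    return address_book[customer]

# First drop-off in Lagos captures the tag; later orders simply look it up.
drop_off_location("customer-001", (6.5244, 3.3792))
print(drop_off_location("customer-001"))  # → (6.5244, 3.3792)
```

Each completed delivery thus grows a de facto address database for a city where many customers have no formal address.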

“We are not the first company to try to tackle this problem in Lagos or on the continent,” said Adetayo Bamiduro, MBA ’15, co-founder of MAX. “We’re the first company to think about it this way. The problem we’re solving requires a very detailed systematic approach. We’re not just building logistics. We’re not just building retail.”

“If you look at our systematic diagram, you can see how all of these factors together are driving the growth of the entire platform,” he said. “And that’s something you learn at MIT.”

Bamiduro believes he can expand MAX across Africa. He is building a retail component that will allow people to shop locally through the MAX app. MAX, of course, will then deliver the purchase.

This month, the young entrepreneur will travel to London to tell MAX’s story—how it was conceived, how it was built, how it can scale—and to meet with investors, government officials, entrepreneurs, and professors at an MIT conference on entrepreneurship and finance in the developing world.

Showcasing innovation-driven enterprises
The Dec. 13 conference is hosted by the MIT Legatum Center for Development and Entrepreneurship, where Bamiduro was a fellow in 2014–2015. Along with a Dec. 2 startup showcase by the MIT Startup Exchange and the MIT Industrial Liaison Program, it brackets a series of MIT gatherings in London this month. Together, the events demonstrate the facets of an innovation ecosystem unique to MIT, one where theory meets practice, where innovation-driven entrepreneurs are bringing new technologies from labs to markets, and where, very often, the drive for positive social impact is woven into companies’ operations from the outset.

The sort of ecosystem where a company like MAX is born.

“The theme that is embedded into the [Legatum] conference, that is very MIT, is this idea of MIT producing innovation-driven enterprises. Those lead to both social and economic impact around the world,” said Georgina Campbell Flatter, the executive director of the Legatum Center. “Innovation-driven enterprises tend to lead to businesses that will scale. You can reach global markets eventually. You can provide thousands of jobs. Take Adetayo, he’s touching thousands of people’s lives through his business.”

“The theme … that is very MIT, is this idea of MIT producing innovation-driven enterprises,” said Legatum Center executive director Georgina Campbell Flatter.

In London, MIT will showcase this approach in five gatherings. Taken together, this series of unique offerings demonstrates how MIT functions not simply as a collection of individual units, centers, programs, and events, but as a facilitator and leader in global innovation and impact.

Dec. 2 – 2016 MIT Startup Showcase London
More than a dozen MIT-affiliated startups will present at this event connecting industry leaders with innovative startups in enterprise information technology, data analytics, automation, security, and more. The event will include a keynote talk on innovation through analytics from MIT Sloan professor Dimitris Bertsimas, who will also join a panel on managing innovation.

This is the third event of its kind for MIT Startup Exchange, which is led by former MIT Sloan lecturer Trond Undheim. The group chose London because it is a “global innovation capital,” Undheim said.

Startups presenting “Lightning Talks” at the showcase all feature a technology innovation at their center. Tulip Interfaces, for example, is an internet of things platform company for the manufacturing sector. Luminoso is an artificial intelligence-equipped “deep analytics” company that helps clients like Autodesk and Sprint gain insight into customer sentiment and feedback.

Tulip Interfaces, an internet of things platform company for manufacturing, will join MIT’s Dec. 2 startup showcase in London.

Unlike startup incubators and accelerators, the MIT Startup Exchange does not mentor new entrepreneurs or teach them how to execute on a business plan. Instead, it connects mature startups with industry leaders for strategic partnerships, product development collaborations, new business lines, and new customers.

The Dec. 2 startup showcase is “focused on technology, but also on … interaction in the innovation community,” Undheim said.

Dec. 7–8 – MIT Sloan Executive Board Meeting
Each of MIT Sloan’s four executive boards is led by a distinguished alumnus and composed of business, government, and academic leaders. The boards form a communication channel between the school and alumni and the wider world, both informing the work of the school and bringing new MIT research and discoveries from campus to the global community.

At this event, a meeting for the European, Middle Eastern, South Asian, and African Executive Board, members will hear from MIT Sloan Dean David Schmittlein on the state of the school, receive an update about the MIT Sloan Sustainability Initiative, learn about new work from the school’s research centers, and join finance professors Antoinette Schoar and David Thesmar for a conversation about the Brexit decision by United Kingdom voters to leave the European Union.

Dec. 7–9 – Visionary Investing Workshop
Part of MIT Sloan’s new Family Leaders with Purpose program, this workshop will help members of affluent families develop their skills around investing for both profit and purpose, regardless of where they are in their journey or their level of financial acumen. Family leaders cultivate and sharpen their own vision for doing good in a real-world interactive environment by accessing proprietary MIT tools, frameworks, and faculty.

SP Kothari is the program’s faculty director, supported by program co-heads David Shrier and Heidi Pickett.

“MIT Sloan is responding to feedback we’re getting directly from these families,” Kothari said. “They have a passion for solving some of the world’s toughest problems. What they’re seeking are the tools and skills to find and evaluate viable opportunities, and to craft an action plan that multiple generations can rally around. MIT has a unique capability set in finance, innovation, and impact.”

The immersive three-day workshop includes MIT Sloan faculty presentations on the implications of Brexit on the U.K. financial sector, managing a portfolio in uncertain times, and managing for impact. A well-received feature of the workshop, which has been held three times before, is a forum for participants to interact with guest CEOs of startups, engaging in a series of hands-on learning sessions incorporating frameworks and tools to evaluate the enterprises.

Participants are typically capable of investing a minimum of $10 million in direct deals each year, and attendees are expected from across Europe, Asia, and North America. Pickett said a goal of the program is to build a global community of like-minded families investing for profit and purpose in fields such as health, energy, environment, water, education, agriculture, finance, and real estate.

Dec. 12 – MIT Regional Entrepreneurship Acceleration Program reunion
From Morocco, Al Madinah, Nova Scotia, Iceland, Singapore, and elsewhere, teams of government officials, investors, entrepreneurs, academics, and business leaders will meet to discuss how they are building local entrepreneurial ecosystems.

The gathering brings together participants in the MIT Regional Entrepreneurship Acceleration Program, a two-year global executive education initiative designed to facilitate economic growth, social progress, and job creation in regions worldwide.

The program, known as MIT REAP, is in its fourth year. Nearly 30 teams from around the world have participated. Now, MIT is launching a Global Innovation Network, a group for MIT REAP alumni to network and facilitate connections, conversation, and support between the teams.

“We used to think of REAP as this two-year long program, but actually participants enjoy it so much they want to continue the relationship,” said Phil Budden, an MIT Sloan senior lecturer who teaches in the program and is its diplomatic adviser.

The afternoon meeting will include updates on MIT entrepreneurship activity, including MIT’s new technology and science incubator, called The Engine, and a new entrepreneurship and innovation minor, as well as updates from six of the teams on their work in the program, said Sarah Jane Maxted, executive director of the program.

“It will be a success if we give them a chance to share their experiences,” said Professor Fiona Murray, associate dean for innovation and co-director of the MIT Innovation Initiative. “For teams that are much further along—because they are now four years into this whole experience of developing and accelerating their ecosystems—we give them a chance to share with teams that joined REAP more recently. We’ll know it’s successful when we’re really building those bridges across cohorts.”

Dec. 13 – Accelerating Developing World Growth Through Entrepreneurship and Finance: A Conversation Led by MIT
“We don’t just want to have a conversation focused on U.S. investors and U.S. alumni,” says the Legatum Center’s Flatter.

So Flatter, MAX’s Bamiduro, and many others in the MIT, global investing, and developing world entrepreneurship communities will meet in London, due to its greater proximity to Africa and other developing world regions. As with the MIT Startup Exchange conference, entrepreneurs will be at the center of every discussion, presenting TED-style “Vision Talks” and joining investors and academics for panels covering seed stage capital, crowdfunding, and philanthropic capital, among other topics.

“Having that entrepreneur perspective just brings a different lens to the problem,” Flatter said. “When you bring together governments and corporations, and they have a conversation about entrepreneurship, they’re often talking about what they think is best based on secondary data. Don’t get me wrong—they’re often working on great things and working for a great cause. But when the entrepreneur is in the room, the conversation tends to be different. I think MIT brings that.”

The goal of the conference is to facilitate a conversation about funding mechanisms while showcasing and celebrating the entrepreneurs working in the developing world, Flatter said. Those funding mechanisms include equity crowdfunding, program-related investments, grants and fellowships, and prize money from entrepreneurship competitions.

“Our mission is to foster broad-based prosperity around the world through entrepreneurship,” Flatter said. “One of the sticking points for early-stage entrepreneurs is how to raise the capital they need, and in the developing world it’s harder to raise capital because people are less willing to take risks. Traditional VCs and angels don’t tend to invest in emerging economies, but there are new funding mechanisms that our students and our alumni have found to be helpful.”

Entrepreneurs working in the developing world also face a unique set of ethical choices, as there can be few, if any, regulations governing minimum wage or health care coverage. In the way they structure their business operations, startup founders can choose to make a broader impact beyond simply offering a job or a needed service.

In providing drivers with no-interest loans to buy a motorcycle, for example, MAX is empowering the employees it relies on, Flatter said. Another MIT alumnus-founded startup appearing at the conference, Soko, relies on a “virtual factory” of independent jewelry makers around the developing world who retain 25 to 35 percent of the revenue generated when their products are sold to major retailers.

“At the core they are well-run businesses, for-profit businesses, but because of the principled entrepreneur leaders at the top they make business choices that are also ethical and ultimately have impact,” Flatter said.

A bias for action
Murray said the flurry of MIT innovation events in the city is distinguished by the way it brings so many different people together.

“These events engage all the key ecosystem stakeholders,” she said. “And I think that’s something that MIT over the years has done extremely well. I’m not sure it has always been conscious about it. I think it has become more conscious. But now, in a sense, we’re taking those same conversations [we host in Cambridge] to a different part of the world and saying ‘We want to have this conversation with you—risk capital providers, large corporations, policymakers, particularly focused on development, innovation, and entrepreneurship.’”

And the conversation won’t end in December. On January 13, MIT President L. Rafael Reif will visit London as part of the MIT Better World campaign, to discuss his vision for MIT’s future and for the MIT community worldwide. It’s a fitting follow-up to a series of events that demonstrates MIT’s singular impact on global innovation.

For his part, MAX’s Adetayo Bamiduro is looking forward to meeting other members of the MIT innovation ecosystem when he visits London for the Legatum Center event. He said his company would not exist today without his Legatum fellowship, which comes with $50,000 in tuition and a stipend and $10,000 in travel and prototyping grants.

“The cushion of the scholarship is a huge motivating factor for taking the plunge right out of school,” he said. “The entire MIT entrepreneurship ecosystem is very strong; it’s very powerful. The MIT environment is unique because you have management and execution skills that you learn. But you also learn engineering skills. We had the capability to build the system ourselves.”

“The MIT slogan—mind and hand—has really distinguished us,” Bamiduro said. And that ethos—the movement from idea to action, and from classrooms and labs into the world—is what MIT’s packed London agenda is all about.

For Bamiduro and his co-founder Chinedu Azodoh, MFin ’15, that means not just having a new idea, and not just building MAX’s digital platform and executing its business plan. It also means learning to ride motorcycles, zipping into Lagos’ busy streets to make deliveries when needed.

That man hopping on a motorcycle to deliver a package to one of MAX’s customers? That’s the CEO.

December 3, 2016 | More

Robots

Robots are moving into our homes, but there’s no killer app

Not too long ago, robots were giant, caged things, mainly found in automotive manufacturing lines. Social robotics was a new field of research pursued by the best and brightest in university research labs.

In the past few years, however, it seems that social robots have finally come of age. All of a sudden, the market is teeming with products. Some are distinctly humanoid.

The rise of social robots

SoftBank Robotics’ Nao, Pepper and Romeo all have a head and two arms. With their stylised designs, they deftly avoid the “uncanny valley” of human-machine interfaces (realistic enough to look human, but non-human enough to look spooky).

Others are more subdued in their anthropomorphism. Blue Frog Robotics’ Buddy sports an animated face on a screen, and scoots around on wheels. Jibo is yet more subtle in its ability to evoke humanity, with its stationary base and a head that can turn and nod.

What is a social robot, and what makes it special?

In her seminal paper, “Towards sociable robots”, Professor Cynthia Breazeal, inventor of Jibo, describes social robots as having “the ability to interact with people in an entertaining, engaging, or anthropomorphic manner.”

The most noticeable quality in the interactions between a person and a social robot is the emotion. This can be partially achieved via a speech-based interface. For example, Amazon is reputedly working on an enhancement to Alexa, the virtual assistant that lives inside the Echo device, to help it understand emotion. China’s Turing Robot goes a step further, and claims that its Turing Robot OS already understands emotion.

Read the full post at The Conversation.

Elaine Chen is a startup veteran, product strategy and innovation consultant, and author who has brought numerous hardware and software products to market.

December 1, 2016 | More

How to prepare for the cyberattack that is coming to your company

Cybersecurity is a $445 billion problem, and some predict that figure could rise to $6 trillion by 2021. The list of companies that have already been hacked, attacked, and breached – suffering business interruptions, intellectual property losses, and exposing their customers to identity theft – reads like a who’s who of the retail, tech, telecom, manufacturing and financial services industries, among others. The finances, operations, customer data, R&D, intellectual property and brand reputations of all companies are at risk, which makes cybersecurity a fiduciary responsibility of the board and senior management. Yet in many organizations, top executives and board members still believe that cybersecurity is only an IT issue.

Nothing could be further from the truth; IT alone will never be able to address cybersecurity in a meaningful way. Sustainably addressing cyber risk requires an organization-wide and cross-functional approach, and the integration of cybersecurity and business strategy. Boards and senior management play a pivotal role in creating the organizational and cultural environment for such a joint approach. Top management and board members must recognize the risks involved and take steps to ensure they are prepared for the day that their company is compromised – because it’s all but certain it will happen.

Over the past year, in collaboration with the cyber resilience initiative of the World Economic Forum, BCG, MIT Connection Science and MIT Sloan (IC)3 have worked together to identify, design and test methods to effectively engage boards and other senior stakeholders on the critical, complex issue of cybersecurity. While there are robust principles to be followed and tools to be employed to both help prevent attacks and to deal with attacks that have occurred, we have found one medium that is particularly well suited to boosting the engagement and preparedness of top management and board members: tabletop exercises that simulate cybersecurity events and their fallout in real time.

These exercises can be useful in at least three ways. The first is practicing incident response, business continuity and disaster recovery plans, as well as decision-making under pressure, so that top leadership is not introduced to the far-reaching ramifications of a cyber breach only when one has just occurred. Second, immersive and interactive exercises can be the most effective (and memorable) method of teaching the basic concepts of cybersecurity. Third, these exercises can be used as a laboratory for developing and testing cost-effective strategies for cybersecurity defense and mitigating the consequences of cyberattacks.

Practicing incident response

Military commands play war games (including cyberwar games). Schools and office buildings practice evacuation procedures and fire drills. The goals include improving performance, learning from doing, and saving lives. Captain Chesley “Sully” Sullenberger attributed his successful emergency landing of US Airways flight 1549 in the Hudson River, after the plane lost both engines on takeoff, to the extensive drilling and rehearsal he had undergone in flight simulators.

In similar fashion, by practicing the implementation of incident response, business continuity and disaster recovery plans in a simulated cyberattack, board members and senior executives can gain a comprehensive understanding of how these attacks unfold, the range of potential impacts, and their individual roles during a response, including potential interaction with law enforcement, regulatory officials, shareholders, employees and customers. For this reason alone, such an exercise ought to be an essential part of any cybersecurity programme.

US Department of Homeland Security employees monitoring, tracking, and investigating cyber incidents

Image: REUTERS/Chris Morgan/Idaho National Laboratory

Learning by doing

The most effective way of learning is by doing. Think about kids learning to play soccer, for example. Studies by BCG and MIT have shown that the same theory applies to learning basic cybersecurity concepts. “Doing” via immersion in a simulated cyberattack gives executives working knowledge of the wide variety of cybersecurity concepts that they need to understand to properly support the cyber resilience of their organization.

Cybersecurity is a complex field. The first step is defining a standard syllabus of subjects that need to be covered, which can include liabilities, mandatory regulations, voluntary guidelines, common threats, assets, methods of protecting assets, risk management, methods of detecting intrusions, forensics, and other key capabilities. The second step is taking teams of executives and board members through immersive scenarios using interactive simulations in which the concepts of the syllabus come into play and the impact of board decisions on the organization’s P&L is modeled. For example, what are the liabilities to the company (and to the board members) if the company continues operations in the face of a known cyber breach? What systems and protections does the company have in place to redress a cyber incursion? What are the legal and regulatory (and good common-sense) requirements for notifying customers, shareholders, employees and other stakeholders?

In our exercises, participating executives may operate as a single collaborative team, or they may be divided into two or more teams, which compete to see which obtains a better score and finishes the exercise with the highest profits in their virtual P&L. Using such a hypothetical business case approach, the board and senior management learn cybersecurity concepts by experiencing them, and our research shows that they emerge with an excellent understanding of what otherwise seems like a daunting technical challenge.

Developing a cybersecurity strategy

Companies use laboratories to test products and processes before they are put into production. In a similar vein, tabletop exercises enable companies to test, evaluate and refine cybersecurity strategies, and in so doing, to convert ideas and invention into a systematic, scientific discipline.

When executives are immersed in a properly constructed scenario, they see how cyber defenses they have built, or plan to build, actually perform, and the benefits that can be achieved by investments in further vulnerability prevention, attack detection, attack mitigation and recovery. By living through a simulation using the company’s own cybersecurity investment plan, the board and senior management can experience first-hand the impact of each proposed investment, from training to technology. At the end of the exercise, they can consider changes, improvements – and whether a different cybersecurity investment plan might have provided a better outcome. For example, would a greater investment in multi-factor authentication, and/or advanced biometrics, have negated the attack? Would a larger investment in supply chain cybersecurity have made a difference? What would be the benefit of implementing a company-wide training programme over six months rather than over 18 months? The goal is tangible output from the workshop, including a roadmap of next steps and a set of action items that optimize investments for cyber defense.
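The kind of investment trade-off these questions probe can be sketched as a back-of-envelope expected-loss comparison. Every number, plan, and control effect below is invented for illustration; real exercises model this far more richly.

```python
# Illustrative model of the trade-off a tabletop exercise explores:
# compare security-investment plans by total expected annual cost,
# where expected loss = attack rate x per-incident impact, and each
# control reduces one of those two factors. All figures are made up.

def expected_annual_cost(base_rate, base_impact, controls):
    """controls: list of (annual_cost, rate_reduction, impact_reduction)."""
    rate, impact, spend = base_rate, base_impact, 0.0
    for cost, rate_cut, impact_cut in controls:
        spend += cost
        rate *= (1 - rate_cut)      # prevention lowers successful-attack rate
        impact *= (1 - impact_cut)  # mitigation lowers per-incident impact
    return spend + rate * impact    # spend plus residual expected loss

# Plan A is prevention-heavy; Plan B mixes prevention with mitigation.
plan_a = [(400_000, 0.5, 0.0), (300_000, 0.5, 0.0)]
plan_b = [(400_000, 0.5, 0.0), (200_000, 0.0, 0.5)]
cost_a = expected_annual_cost(2.0, 1_500_000, plan_a)  # 1,450,000
cost_b = expected_annual_cost(2.0, 1_500_000, plan_b)  # 1,350,000
```

In this toy comparison the cheaper mitigation control beats the second prevention control, the kind of non-obvious result the exercises are designed to surface and debate.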

These immersive exercises allow organizations to focus on how to plan and budget to maximize the business resiliency, including the cyber resiliency, of the company. Sometimes the best investments may be ones that reduce consequences of an attack, rather than trying to prevent the attack outright. A properly designed exercise enables board members and senior management to make more informed trade-offs and decisions on how to best invest in cyber resilience.

Handling cyberattacks is a company-wide concern. Building an effective cybersecurity strategy and culture is an essential competitive differentiator and business enabler. Culture starts with leadership, and leadership starts at the top. Through immersive tabletop exercises, leaders can gain that understanding and begin to create a culture of cyber resilience in their organizations.


Written by

Michael Coden, Head of the Cybersecurity Practice, BCG Platinion

Stuart Madnick, Professor, Massachusetts Institute of Technology (MIT)

Alex Pentland, Professor, Massachusetts Institute of Technology (MIT)

Shoaib Yousuf, Project Leader, The Boston Consulting Group

The views expressed in this article are those of the authors alone and not the World Economic Forum.

November 30, 2016 | More

MIT’s beer game offers supply chain lessons (without the actual booze)

One of the highlights of MIT Sloan orientation week was playing the “beer game”, a role-play simulation that provides a glimpse into supply chain challenges that managers in the real world often face. Hosted by Professor John D Sterman, the game (sadly) involves no real consumption of beer, and instead simulates the production and distribution of beer from the manufacturer to the end customer.

The objective of the game is to meet customer demand for cases of beer through a four stage supply chain – the manufacturer, distributor, wholesaler and retailer – with minimal expenditure on inventory and backlogs. The challenge for the students managing each link in the chain is to fulfill incoming orders of beer by placing orders with the next upstream party, with inter-party communication and collaboration prohibited.

Only the retailer has true information about customer demand for beer and in turn sends an order for next week’s shipment up the supply chain. The manufacturer sitting at the top of the chain decides how much beer to brew, which, of course, takes time. The winner is the supply chain that achieves the lowest total operating cost over the course of the game. During our orientation, there were nearly 50 supply chains with 400 people playing the game simultaneously, adding to the excitement.
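The order-and-ship loop described above can be sketched as a tiny simulation. This is a minimal illustration, not the official game: the starting inventories, the two-week shipping delay, the cost weights, and the naive “order whatever you owe” policy are all assumptions made for the sketch.

```python
# Minimal sketch of the beer game's four-stage chain
# (retailer -> wholesaler -> distributor -> factory).

def simulate(demand, holding=0.5, backlog_cost=1.0):
    n = 4                                   # retailer, wholesaler, distributor, factory
    inv = [12] * n                          # on-hand stock at each stage
    back = [0] * n                          # unfilled orders owed downstream
    pipe = [[4, 4] for _ in range(n)]       # shipments in transit (2-week delay)
    cost, factory_orders = 0.0, []

    for week_demand in demand:
        arriving = [pipe[s].pop(0) for s in range(n)]
        shipped = [0] * n
        order = week_demand                 # only the retailer sees true demand
        for s in range(n):
            inv[s] += arriving[s]
            owed = order + back[s]
            shipped[s] = min(owed, inv[s])  # can only ship what's on hand
            inv[s] -= shipped[s]
            back[s] = owed - shipped[s]
            order = owed                    # naive policy: reorder everything owed
        for s in range(n - 1):
            pipe[s].append(shipped[s + 1])  # upstream shipment arrives later
        pipe[n - 1].append(order)           # factory brews whatever it ordered
        factory_orders.append(order)
        cost += sum(inv) * holding + sum(back) * backlog_cost
    return cost, factory_orders
```

With steady demand the chain sits in equilibrium; a one-time demand spike drains inventories, backlogs compound as they propagate, and the factory ends up ordering far more than customers ever asked for — the bullwhip effect the game is designed to expose.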

An environment of confusion and chaos prevailed for the most part. Teams felt frustrated and helpless, wondering whether the huge stockpiles and backlogs were caused by erratic customer demand or by a lack of understanding of supply chain principles. During the discussion after the game, people did not hesitate to blame others on their own team for amplifying or understating customer demand and causing the supply chain issues.

Originally created in the late 1950s by Jay Forrester, a computing pioneer, to research the dynamics of supply chains, today the beer game is used to illustrate the principles of complex systems. Prof Sterman, who has run the beer game nearly 200 times over the past three decades, stresses the bigger lessons and explains that we allow ourselves to become prisoners of the complex systems in which we are embedded, overreacting to events and blaming others for our problems.

He adds that sustaining excellence lies in redesigning the systems we have created, including their physical structure, incentives, and the mental models we hold about other people. “When we shift from blame to trust, from disdain to respect, we can harness the capabilities of all people to understand how our actions create our future,” says Prof Sterman. “Not just in the here and now, but globally and for the long term.”

Thus, a manager who fires his employees as a consequence of a bad decision is putting a Band-Aid on a problem that will resurface. The role of a leader, instead, is to create a system that empowers its members, providing an eco-system for them to thrive. But the biggest impediments to learning are the mental models through which we construct our understanding of reality. By blaming outside forces we deny ourselves the opportunity to learn.

The “beer game” exemplifies MIT’s emphasis on education for practical application.

November 28, 2016 | More

Stop meditating alone–for productivity gains, it’s a team sport

You’ve probably heard that meditation increases focus, memory, and compassion, according to a range of studies. Yet only 8% of us do it. This number could get a boost soon, as several companies introduce the practice into group settings in the workplace.

A survey by Fidelity Investments and the National Business Group on Health predicts that 22% of Fortune 500 companies will use mindfulness or brain training at the workplace by the end of the year, as a way to improve employee health and productivity, decrease absenteeism, and enhance quality of life. And the survey suggests that this number could double in 2017.

Anders Ferguson, founding principal and partner at the wealth management firm Veris Wealth Partners, jumped on the mindfulness-at-work bandwagon three years ago when he wanted to enhance the work habits of his employees. Partnering with three other investment firms, Ferguson implemented a variety of mindfulness practices. They let employees decide whether or not they wanted to participate, and 100% of them do.

All meetings start with a minute of silence, basic mindfulness breathing, and meditation. Employees are also encouraged to perform daily acts of compassion and appreciation with the people in their work and personal life, as well as random acts of kindness for strangers. Additionally, they’re encouraged to put down their digital devices for at least an hour each day.

Ferguson says the technique seems simple, but the results have included an increase in productivity and a decrease in stress. “The way many of us work is not working,” he says. “Mental effectiveness has two fundamental rules: focus on what you choose, and choose your distractions mindfully.”

While meditation is often perceived as a solitary practice, experts say there are several reasons why it’s better done as a group.

It’s Easier To Focus

Meditation is about compassion and collective consciousness, rather than just reducing one’s own stress and anxiety, says Tara Swart, senior lecturer at MIT Sloan Executive Education. “Part of mindfulness meditation involves projecting feelings of forgiveness and compassion—both of which are, by their nature, targeted at third parties,” she says.

Meditating in a group may make it easier to focus on these objectives, increasing the ability to override unconscious biases and get the most out of the exercise, she says.

It Strengthens Your Sense of Community

Even when you focus inward in total silence, there is a palpable sense of community, support, and connection when you meditate with others, says Micah Mortali, director of the Kripalu Schools of Yoga and Ayurveda at the Kripalu Center for Yoga and Health.

Jay Vidyarthi, head of user experience design on Meditation at Muse, suggests that beginner meditation practices include discussion periods where people share their experiences, which can be powerful.

“Very often, participants discover that their own experiences, whether positive or negative, are very common,” he says. “This not only validates their own feelings to help them feel at ease, but it also leads to a deep understanding of just how similar we all are.”

It Promotes Compassion

Meditation can help you practice being neutral, having compassion, and letting go of thoughts and judgments of others, says Mortali.

“Maybe your neighbor has gas, or makes a weird sound when they breathe,” he says. “You may notice yourself judging or feeling aversion. This is a positive experience as it provides an opportunity to practice how you show up off your cushion.”

It Provides Accountability

Meditation is more effective when it’s used on a consistent basis. Practicing as a group can make you more accountable to your commitment, says Jennice Vilhauer, director of the outpatient psychotherapy program at the Emory Clinic.

“You are more likely to show up and actually do it if others are expecting you to be there,” she says.

It Can Lead To Greater Calm

Oxytocin, a hormone that promotes bonding, is likely to be more abundant in situations where people can communicate and interact freely over a shared experience, says Swart.

“This lowers our guard, makes us warmer toward others and can induce a calmer state, as well as encouraging feelings of acceptance and belonging, rather than isolation,” she says.

Group Meditation Does Have Drawbacks

It’s not all sunshine and yogis, say the experts. While group meditation has many benefits, there are three reasons why you need to proceed with caution.

It Can Be Too Emotional

Not every person responds well to meditation as it puts you in connection with your inner emotional space, says Vilhauer. “For many people, especially anyone who has experienced traumatic past events, what’s stored there can be quite painful and overwhelming,” she says. “It can create a sudden release of emotion and it isn’t uncommon for people to start crying uncontrollably, to feel very uncomfortable physical sensations, or to develop a sense of depersonalization where they feel detached from their bodies.”

This can lead to embarrassment, shame, or even re-traumatization in a group setting.

It Can Be Uncomfortable

For those who are new to the practice, meditation can make you drowsy and even cause some to fall asleep, says Swart.

“Involuntary actions like this can make some people embarrassed and self-aware,” she says. “If they come into a group session wary about what others will think of them, they are less likely to relax and feel the benefits of the exercise.”

This can be particularly true of men, who bear the burden of more of a social stigma around mindfulness and mental health, says Swart.

It Can Lead To Reliance on a Group

It is important to be able to discover wholeness being alone, says Melissa Kauffmann, creator of the mindfulness program at Creativity Challenge Community, an elementary school in Denver.

“When you are focused, alone, and free from distraction, you can think deeply and engage in the art of meditating,” she says, adding, “Meditating in a group can be a crutch from truly finding a connection.”

Her advice for a balanced meditation practice: “Enjoy meditation solo and in a group setting.”

November 28, 2016 | More

Engineering

New resource for optical chips

The Semiconductor Industry Association has estimated that at current rates of increase, computers’ energy requirements will exceed the world’s total power output by 2040.

Using light rather than electricity to move data would dramatically reduce computer chips’ energy consumption, and the past 20 years have seen remarkable progress in the development of silicon photonics, or optical devices that are made from silicon so they can easily be integrated with electronics on silicon chips.

But existing silicon-photonic devices rely on different physical mechanisms than the high-end optoelectronic components in telecommunications networks do. The telecom devices exploit so-called second-order nonlinearities, which make optical signal processing more efficient and reliable.

In the latest issue of Nature Photonics, MIT researchers present a practical way to introduce second-order nonlinearities into silicon photonics. They also report prototypes of two different silicon devices that exploit those nonlinearities: a modulator, which encodes data onto an optical beam, and a frequency doubler, a component vital to the development of lasers that can be precisely tuned to a range of different frequencies.

In optics, a linear system is one whose outputs are always at the same frequencies as its inputs. So a frequency doubler, for instance, is an inherently nonlinear device.
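Concretely (a standard textbook sketch, not taken from the MIT paper): the material’s polarization can be expanded in powers of the optical field,

```latex
P = \epsilon_0 \left( \chi^{(1)} E + \chi^{(2)} E^2 + \chi^{(3)} E^3 + \cdots \right),
\qquad E = E_0 \cos(\omega t)
\;\Rightarrow\;
\chi^{(2)} E^2 = \tfrac{1}{2}\, \chi^{(2)} E_0^2 \bigl( 1 + \cos(2\omega t) \bigr).
```

The second-order term thus contains a component oscillating at twice the input frequency, which is why frequency doubling requires a second-order nonlinearity.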

“We now have the ability to have a second-order nonlinearity in silicon, and this is the first real demonstration of that,” says Michael Watts, an associate professor of electrical engineering and computer science at MIT and senior author on the new paper.

“Now you can build a phase modulator that is not dependent on the free-carrier effect in silicon. The benefit there is that the free-carrier effect in silicon always has a phase and amplitude coupling. So whenever you change the carrier concentration, you’re changing both the phase and the amplitude of the wave that’s passing through it. With second-order nonlinearity, you break that coupling, so you can have a pure phase modulator. That’s important for a lot of applications. Certainly in the communications realm that’s important.”

The first author on the new paper is Erman Timurdogan, who completed his PhD at MIT last year and is now at the silicon-photonics company Analog Photonics. He and Watts are joined by Matthew Byrd, an MIT graduate student in electrical engineering and computer science, and Christopher Poulton, who did his master’s in Watts’s group and is also now at Analog Photonics.

Dopey solutions

If an electromagnetic wave can be thought of as a pattern of regular up-and-down squiggles, a digital modulator perturbs that pattern in fixed ways to represent strings of zeroes and ones. In a silicon modulator, the path that the light wave takes is defined by a waveguide, which is rather like a rail that runs along the top of the modulator.

Existing silicon modulators are doped, meaning they have had impurities added to them through a standard process used in transistor manufacturing. Some doping materials yield p-type silicon, where the “p” is for “positive,” and some yield n-type silicon, where the “n” is for “negative.” In the presence of an electric field, free carriers — electrons that are not associated with particular silicon atoms — tend to concentrate in n-type silicon and to dissipate in p-type silicon.

A conventional silicon modulator is half p-type and half n-type silicon; even the waveguide is split right down the middle. On either side of the waveguide are electrodes, and changing the voltage across the modulator alternately concentrates and dissipates free carriers in the waveguide, to modulate an optical signal passing through.

The MIT researchers’ device is similar, except that the center of the modulator — including the waveguide that runs along its top — is undoped. When a voltage is applied, the free carriers don’t collect in the center of the device; instead, they build up at the boundary between the n-type silicon and the undoped silicon. A corresponding positive charge builds up at the boundary with the p-type silicon, producing an electric field, which is what modulates the optical signal.

Because the free carriers at the center of a conventional silicon modulator can absorb light particles — or photons — traveling through the waveguide, they diminish the strength of the optical signal; modulators that exploit second-order nonlinearities don’t face that problem.

Picking up speed

In principle, the new modulators can also encode data more rapidly than existing silicon modulators do. That's because it takes more time to move free carriers into and out of the waveguide than it does to concentrate and release them at the boundaries with the undoped silicon. The current paper simply reports the phenomenon of nonlinear modulation, but Timurdogan says that the team has since tested prototypes of a modulator whose speeds are competitive with those of the nonlinear modulators found in telecom networks.

The frequency doubler that the researchers demonstrated has a similar design, except that the regions of p- and n-doped silicon that flank the central region of undoped silicon are arranged in regularly spaced bands, perpendicular to the waveguide. The distances between the bands are calibrated to a specific wavelength of light, and when a voltage is applied across them, they double the frequency of the optical signal passing through the waveguide, combining pairs of photons into single photons with twice the energy.
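
The energy bookkeeping can be checked with the Planck relation E = hc/λ: merging two photons into one with twice the energy halves the wavelength. A short sketch; the 1550 nm input is an assumed telecom wavelength, not a figure from the paper:

```python
# Back-of-the-envelope check of the photon bookkeeping (the 1550 nm input
# is an assumed telecom wavelength, not a figure from the paper).
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s

wavelength_in = 1550e-9                 # input wavelength, m
energy_in = h * c / wavelength_in       # energy of one photon, J
energy_out = 2 * energy_in              # two photons merged into one
wavelength_out = h * c / energy_out     # output wavelength, m

print(round(wavelength_out * 1e9, 1))   # 775.0 -- twice the frequency, half the wavelength
```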

Frequency doublers can be used to build extraordinarily precise on-chip optical clocks, optical amplifiers, and sources of terahertz radiation, which has promising security applications.

“Silicon has had a huge renaissance within the optical communication space for a variety of applications,” says Jason Orcutt, a researcher in the Physical Sciences Department at IBM’s Thomas J. Watson Research Center. “However, there are still remaining application spaces — from microwave photonics to quantum optics — where the lack of second-order nonlinear effects in silicon has prevented progress. This is an important step towards addressing a wider range of applications within the mature silicon-photonics platforms around the world.”

“To date, efforts to achieve second-order nonlinear effects in silicon have focused on hard material-science problems,” Orcutt adds. “The [MIT] team has been extremely clever by reminding the physics community what we shouldn’t have forgotten. Applying a simple electric field creates the same basic crystal polarization vector that other researchers have worked hard to create by far more complicated means.”


February 20, 2017 | More

Advanced silicon solar cells

As the world transitions to a low-carbon energy future, near-term, large-scale deployment of solar power will be critical to mitigating climate change by midcentury. Climate scientists estimate that the world will need 10 terawatts (TW) or more of solar power by 2030 — at least 50 times the level deployed today. At the MIT Photovoltaics Research Laboratory (PVLab), teams are working both to define what’s needed to get there and to help make it happen. “Our job is to figure out how to reach a minimum of 10 TW in an economically and environmentally sustainable way through technology innovation,” says Tonio Buonassisi, associate professor of mechanical engineering and lab director.

Their analyses outline a daunting challenge. First they calculated the growth rate of solar required to achieve 10 TW by 2030 and the minimum sustainable price that would elicit that growth without help from subsidies. Current technology is clearly not up to the task. “It would take between $1 trillion and $4 trillion of additional debt to just push current technology into the marketplace to do the job, and that’d be hard,” says Buonassisi. So what needs to change?
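
The implied growth rate can be estimated from the article's own figures. A rough sketch, assuming roughly 0.2 TW installed today, consistent with the "at least 50 times" comparison:

```python
# Rough arithmetic behind the scale-up (assumed figures: ~0.2 TW installed
# in 2017, per the article's "at least 50 times" comparison).
installed_2017 = 10.0 / 50.0   # TW
target_2030 = 10.0             # TW
years = 2030 - 2017

growth = (target_2030 / installed_2017) ** (1.0 / years) - 1.0
print(f"required annual growth: {growth:.0%}")   # about 35% per year, sustained
```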

Using models that combine technological and economic variables, the researchers determined that three changes are required: reduce the cost of modules by 50 percent, increase the conversion efficiency of modules (the fraction of solar energy they convert into electricity) by 50 percent, and decrease the cost of building new factories by 70 percent. Getting all of that to happen quickly enough — within five years — will require near-term policies to incentivize deployment plus a major push on technological innovation to reduce costs so that government support can decrease over time.
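
The first two targets compound: halving module cost while raising efficiency by half cuts the module cost per watt to about a third. A back-of-the-envelope sketch in relative units, not figures from the article:

```python
# Combined effect of the first two targets on module cost per watt
# (relative units; an illustration, not the researchers' model).
module_cost = 1.0   # cost of a module today, normalized
efficiency = 1.0    # conversion efficiency today, normalized

# Halve the module cost, raise efficiency (watts per module) by 50 percent.
new_cost_per_watt = (module_cost * 0.5) / (efficiency * 1.5)
print(f"{new_cost_per_watt:.2f}x today's cost per watt")   # 0.33x
```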

Making strides on efficiency

Major gains are already being made on the conversion efficiency front — both at the MIT PVLab and around the world. One especially promising technology is the passivated emitter and rear cell (PERC), which is based on low-cost crystalline silicon but has a special “architecture” that captures more of the sun’s energy than conventional silicon cells do. While costs must be brought down, the technology promises to bring a 7 percent increase in efficiency, and many experts predict its widespread adoption.

But there’s been a problem. In field tests, some modules containing PERC solar cells have degraded in the sun, with conversion efficiency dropping by fully 10 percent in the first three months. “These modules are supposed to last 25 years, and within just weeks to months they’re generating only 90 percent as much electricity as they’re designed for,” says Ashley Morishige, postdoc in mechanical engineering. That behavior is perplexing because manufacturers thoroughly test the efficiency of their products before releasing them. In addition, not all modules exhibit the problem, and not all companies encounter it. Interestingly, it took up to a few years before individual companies realized that other companies were having the same problem. Manufacturers came up with a variety of engineering solutions to deal with it, but its exact cause remained unknown, prompting concern that it could recur at any time and could affect next-generation cell architectures.

To Buonassisi, it seemed like an opportunity. His lab generally focuses on basic materials problems at the wafer and cell level, but the researchers could equally well apply their equipment and expertise to modules and systems. By defining the problem, they could support the adoption of this energy-efficient technology, helping to bring down materials and labor costs for each watt of power generated.

Working closely with an industrial solar cell manufacturer, the MIT team undertook a “root-cause analysis” to define the source of the problem. The company had come to them for help with the unexpected degradation of their PERC modules and reported some odd trends. PERC modules stored in sunlight for 60 days with their wires connected into a closed loop lost no more efficiency than conventional solar cells typically do during their break-in period. But modules stored in sunlight with open circuits degraded significantly more. In addition, modules made from different silicon ingots displayed different power-loss behavior. And, as shown in Figure 1 in the slideshow above, the drop in efficiency was markedly higher in modules made with cells that had been fabricated at a peak temperature of 960 degrees Celsius than in those containing cells fired at 860 C.

Subatomic misbehavior

Understanding how defects can affect conversion efficiency requires understanding how solar cells work at a fundamental level. Within a photoreactive material such as silicon, electrons exist at two distinct energy levels. At the lower level, they're in the "valence band" and can't flow; at the higher level, they're in the "conduction band" and are free to move. When solar radiation shines onto the material, electrons can absorb enough energy to jump from the valence band to the conduction band, leaving behind vacancies called holes. If all is well, before the electrons lose that extra energy and drop back to the valence band, they travel through an external circuit as electric current.
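
The band picture sets a simple absorption rule: a photon must carry at least the gap energy to promote an electron. A small sketch using silicon's textbook band gap of about 1.12 eV, a standard value rather than one given in the article:

```python
# Illustrative absorption check (silicon's ~1.12 eV band gap is textbook
# data, not a figure from the article): a photon can promote an electron
# from the valence band to the conduction band only if its energy meets
# or exceeds the gap.
h = 4.135667696e-15   # Planck constant, eV*s
c = 2.99792458e8      # speed of light, m/s
E_GAP_SI = 1.12       # silicon band gap, eV

def can_absorb(wavelength_nm):
    """True if a photon of this wavelength can excite an electron across the gap."""
    photon_ev = h * c / (wavelength_nm * 1e-9)
    return photon_ev >= E_GAP_SI

print(can_absorb(500))    # True  -- visible light is absorbed
print(can_absorb(1500))   # False -- infrared beyond roughly 1107 nm passes through
```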

Generally, an electron or hole has to gain or lose a set amount of energy to move from one band to the other. (Although holes are defined as the absence of electrons, physicists view both electrons and holes as “moving” within semiconductors.) But sometimes a metal impurity or a structural flaw in the silicon provides an energy “state” between the valence and conduction bands, enabling electrons and holes to jump to that intermediate energy level — a move achieved with less energy gain or loss. If an electron and hole both make the move, they can recombine, and the electron is no longer available to pass through the external circuit. Power output is lost.

The PVLab researchers quantify that behavior using a measure called lifetime — the average time an electron remains in an excited state before it recombines with a hole. Lifetime critically affects the energy conversion efficiency of a solar cell, and it is “exquisitely sensitive to the presence of defects,” says Buonassisi.

To measure lifetime, the team — led by Morishige and mechanical engineering graduate student Mallory Jensen — uses a technique called lifetime spectroscopy. It involves shining light on a sample or heating it up and monitoring electrical conductivity during and immediately afterward. When current flow goes up, electrons excited by the added energy have jumped into the conduction band. When current drops, they’ve lost that extra energy and fallen back into the valence band. Changes in conductivity over time thus indicate the average lifetime of electrons in the sample.
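
The idea behind the measurement can be sketched with synthetic data: once the excitation stops, the excess signal decays roughly as exp(-t/τ), so the slope of its logarithm against time yields the lifetime. This is a simplified illustration, not the team's actual analysis:

```python
import math

# Simplified sketch of extracting a carrier lifetime from a decay curve
# (synthetic, noiseless data; the assumed lifetime is 50 microseconds).
TRUE_TAU = 50e-6
times = [i * 5e-6 for i in range(20)]                 # seconds
signal = [math.exp(-t / TRUE_TAU) for t in times]     # normalized decay

# Least-squares slope of ln(signal) against time; for exponential decay
# the slope is -1/tau.
logs = [math.log(s) for s in signal]
n = len(times)
mean_t = sum(times) / n
mean_y = sum(logs) / n
slope = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, logs)) \
        / sum((t - mean_t) ** 2 for t in times)

tau_est = -1.0 / slope
print(f"estimated lifetime: {tau_est * 1e6:.1f} microseconds")   # 50.0
```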

Locating and characterizing the defect

To address the performance problems with PERC solar cells, the researchers first needed to figure out where in the modules the primary defects were located. Possibilities included the silicon surface, the aluminum backing, and various interfaces between materials. But the MIT team thought it was likely to be in the bulk silicon itself.

To test that assumption, they used partially fabricated solar cells that had been fired at 750 C or at 950 C and — in each category — one that had been exposed to light and one that had been kept in the dark. They chemically removed the top and bottom layers from each cell, leaving only the bare silicon wafer. They then measured the electron lifetime of all the samples. As shown in Figure 2 in the slideshow above, with the low-temperature pair, lifetime is about the same in the light-exposed and unexposed samples. But with the high-temperature pair, lifetime in the exposed sample is significantly lower than that in the unexposed sample.

Those findings confirm that the observed degradation is largely attributable to defects that are present in the bulk silicon and — when exposed to light — affect lifetime, thus conversion efficiency, in cells that have been fired at higher temperatures. In follow-up tests, the researchers found that by reheating the degraded samples at 200 C for just an hour, they could bring the lifetime back up — but it dropped back down with re-exposure to light.

So how do those defects interfere with conversion efficiency, and what types of contaminants might be involved in their formation? Two characteristics of the defects would help the researchers answer those questions. First is the energy level of the defect — where it falls between the valence and conduction bands. Second is the “capture cross section,” that is, the area over which a defect at a particular location can capture electrons and holes. (The area might be different for electrons than for holes.)

While those characteristics can’t easily be measured directly in the samples, the researchers could use a standard set of equations to infer them based on lifetime measurements taken at different illumination intensities and test temperatures. Using samples that had been fired at 950 C and then exposed to light, they ran lifetime spectroscopy experiments under varying test conditions. With the gathered data, they calculated the energy level and capture cross section of the primary defect causing recombination in their samples. They then consulted the literature to see what elements are known to exhibit those characteristics, making them likely candidates for causing the drop in conversion efficiency observed in their samples.
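
The standard textbook link between those defect properties and lifetime is the Shockley-Read-Hall limit, tau = 1 / (sigma * v_th * N_t). The values below are assumed for illustration; the article reports no numbers:

```python
# Order-of-magnitude sketch of the textbook Shockley-Read-Hall limit:
# for defect density N_t and capture cross section sigma, the
# recombination-limited lifetime is roughly tau = 1 / (sigma * v_th * N_t).
# All three values are assumed for illustration.
v_th = 1e7      # carrier thermal velocity in silicon, cm/s (typical)
sigma = 1e-14   # assumed capture cross section, cm^2
N_t = 1e12      # assumed defect concentration, cm^-3

tau = 1.0 / (sigma * v_th * N_t)   # seconds
print(f"defect-limited lifetime: {tau * 1e6:.0f} microseconds")   # 10 microseconds
```

The inverse dependence is why even trace contamination matters: a tenfold rise in active defect density cuts the lifetime tenfold.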

According to Morishige, the team has narrowed down the list of candidates to a handful of possibilities. “And at least one of them is consistent with much of what we’ve observed,” she says. In this case, a metal contaminant creates defects in the crystal lattice of the silicon during fabrication. Hydrogen atoms that are present combine with those metal atoms, making them electrically neutral so they don’t serve as sites for electron-hole recombination. But under some conditions — notably, when the density of electrons is high — the hydrogen atoms dissociate from the metal, and the defects become very recombination-active.

That explanation fits with the company's initial reports on their modules. Cells fired at higher temperatures would be more susceptible to light-induced damage because the silicon in them typically contains more impurities and less hydrogen. And performance would vary from ingot to ingot because different batches of silicon contain different concentrations of contaminants as well as hydrogen. Finally, baking the silicon at 200 C — as the researchers did — could cause the hydrogen atoms to recombine with the metal, neutralizing the defects.

Based on that possible mechanism, the researchers offer manufacturers two recommendations. First, try to adjust their manufacturing processes so that they can perform the firing step at a lower temperature. And second, make sure that their silicon has sufficiently low concentrations of certain metals that the researchers have pinpointed as likely sources of the problem.

Unintended consequences

The bottom line, observes Buonassisi, is that the very feature that makes the PERC technology efficient — the special architecture designed to capture solar energy efficiently — is what reveals a problem inherent in the fabricated material. “The cell people did everything right,” he says. “It’s the quintessential law of unintended consequences.” And if the problem is the higher density of excited electrons interacting with defects in the silicon wafer, then developing effective strategies for dealing with it will only get more important because next-generation device designs and decreasing wafer thicknesses will bring even higher electron densities.

To Buonassisi, this work demonstrates the importance of talking across boundaries. He advocates communication among all participants in the solar community — both private companies and research organizations — as well as collaboration among experts in every area — from feedstock materials to wafers, cells, and modules to system integration and module installation. “Our laboratory is taking active steps to bring together a community of stakeholders and create a vertically integrated R&D platform that I hope will enable us to more quickly address the technical challenges and help lead to 10 TW of PV by 2030,” he says.

This research was funded by the National Science Foundation, the U.S. Department of Energy, and the National Research Foundation Singapore through the Singapore-MIT Alliance for Research and Technology.


February 17, 2017 | More

Microbial manufacturing

Using advanced fermentation technology, industrial biotech startup Manus Bio hopes to make manufacturing flavors, fragrances, and other products greener and more cost-effective — and maybe create new products in the process.

The MIT spinout has created a low-cost process for engineering microbes with complex metabolic pathways borrowed from plants, which can produce an array of rare and expensive ingredients used to manufacture noncaloric beverages, perfumes, toothpastes, detergents, pesticides, and even therapeutics, among other products. Moreover, the reprogrammed microbes allow for more control in identifying and extracting compounds along the metabolic pathway, which could lead to discoveries of new compound ingredients.

Most recently, Manus has recreated a natural plant process in microbes to cheaply produce mass quantities of a coveted stevia plant compound called Rebaudioside M (Reb M), a zero-calorie sweetener noted for being much sweeter than today's commercial alternatives. In nature, only 0.01 percent of the compound can be extracted from the stevia plant, so companies extract a more abundant but more bitter compound.

Manus, on the other hand, has engineered bacteria to mimic the stevia plant’s metabolic pathway. When put through the startup’s fermentation process, the bacteria produced the compound at greater than 95 percent purity.

Production of the new sweetener demonstrates how Manus’ microbial engineering can be used to make more refined flavors and other products more cost effectively, says MIT professor Gregory Stephanopoulos, who co-founded the startup and co-invented the core technology with former postdoc and current Manus CEO Ajikumar Parayil. On average, Manus’ process is about one-tenth the cost of any plant-extraction method and significantly reduces use of land resources.

“If you take the original compound from the stevia plant, it has a metallic taste. But if you isolate the components of the metabolic pathway and find individual compounds, then you end up with the product of the highest interest,” says Stephanopoulos, who serves as a scientific advisor to the startup.

Manus’ commercial fermentation process involves engineering microbes with plant metabolic pathways, and placing them into large-scale fermentors with inexpensive sugars to feed on. While fermenting, the microbes produce large amounts of the ingredients that can be extracted with commercial processes. Manus plans to scale up to commercial levels this year and sell the products to their industrial partners.

Another product in Manus' pipeline is a rare compound called nootkatone, a key component found in grapefruit that is used as an environmentally friendly insect repellent. It currently costs several thousand dollars per kilogram to produce through traditional methods. But, produced more cheaply and in greater quantities, it could be used, for instance, as an environmentally safe way to help fight Lyme disease, malaria, Zika virus, and other insect-borne pathogens.

Not just “slapping genes together”

Fermenting engineered microbes to produce certain compounds has become more commonplace in recent years. But the key to Manus’ process is engineering the pathway such that it can produce sufficient quantities of those compounds to be commercially interesting, Stephanopoulos says. “Slapping genes together to make a product is fine, but this doesn’t give you a platform for producing something economically,” he says. “There’s a big jump between making a few milligrams of a compound and a few grams, which is what you need to make it commercially viable.”

The core technology traces back to novel work Stephanopoulos and Parayil began at MIT. In the mid-2000s, the two researchers modified the complex metabolic pathway in bacteria that produces isoprenoids, a diverse group of more than 60,000 molecules that are used to make many products, including therapeutics. Tweaking that pathway for commercial purposes has been done before, “but we paid special attention to the amount of product produced,” Stephanopoulos says.

In 2010, Stephanopoulos, Parayil, and other MIT researchers published their first paper on the work in Science. In it, they describe engineering microbes with a metabolic pathway of 17 complex intermediate steps that could produce large quantities of critical intermediate compounds of the cancer drug Taxol, which was originally extracted from Pacific yew tree bark. To do so, the researchers added enzymes and plant genes to the pathway, which helped catalyze the intermediate steps and eliminate bottlenecks that slowed the pathway. Doing so increased production of the compounds 1,000 times over traditional microbe-engineering methods.

A major feature of the paper, Parayil says, was using enzymes to cut the linear pathway into a network of separate, distinct modules that can be more easily controlled and modified — a process referred to as multivariate modular metabolic engineering (MMME). “Basically, the core concept was simplifying the biology for engineering,” he says.

Around the same time, a researcher from a company in the flavor and fragrance industry was visiting MIT through the Industrial Liaison Program (ILP) to learn about current innovations. After meeting with Stephanopoulos and Parayil, the representative persuaded her company to fund further development of the technology. In 2012, the two researchers launched Manus’ lab in Cambridge to commercialize the technology.

Stephanopoulos points to this initial industrial collaboration, facilitated through the ILP, as a stepping stone in Manus’ success. Apart from funding, the unnamed company provided insights about manufacturing and getting other companies to buy into the idea of novel technologies. “That was one of our competitive advantages. We learned a lot by working with this company from day one,” Stephanopoulos says.

On his end, Parayil brought the business idea to MIT’s Innovation Teams (i-Teams) — where students from across disciplines flesh out strategies for turning lab technologies into commercial products — and to the Martin Trust Center for MIT Entrepreneurship, the ILP, and classes such as 15.366 (Energy Ventures), which helped him refine a business plan and contact customers, among other things. “Those were unique experiences that showed how to translate technology from the lab to market,” Parayil says.

Pathway for new discoveries

Today, Manus’ technology has been verified in a number of academic publications, including in Science and PNAS. Along with MMME, the process now incorporates pathway integrated protein engineering, which uses design tools to enable fast and efficient enzyme engineering, and integrated multivariate omics analysis, a suite of analytics tools to uncover bottlenecks in metabolic pathways.

Apart from cutting costs and use of land resources, the technology also represents a platform “that can aid in the discovery of new molecules,” Parayil says. In nature, for instance, a compound extracted from a plant represents the end product of long, complex metabolic processes with many intermediate steps. Currently, there’s no way to discover all the compounds produced along the way.

Manus, however, can monitor the entire metabolic pathway and identify, tweak, and potentially extract previously untested compounds produced at any stage. In doing so, “you multiply incredibly the number of chemicals that may have very important properties as, say, pharmaceuticals, flavors, and pesticides,” Stephanopoulos says. But that’s further down the road, he adds.

This year is "particularly critical" for Manus, Stephanopoulos says. The Cambridge-based startup is currently ramping up production for commercialization of the sweetener and other products. "If Manus demonstrates the ability … to produce compounds at commercial scale, it will seal the credibility of the company as a serious contender in the area of biotechnology and … fragrance, flavor, and sweetener manufacturing," he says.


February 3, 2017 | More

Professor Tom Leighton and Danny Lewin SM ’98 named to National Inventors Hall of Fame

Is the Internet old or new? According to MIT professor of mathematics Tom Leighton, co-founder of Akamai, the internet is just getting started. His opinion counts since his firm, launched in 1998 with pivotal help from Danny Lewin SM ’98, keeps the internet speedy by copying and channeling massive amounts of data into orderly and secure places that are quick to access. Now, the National Inventors Hall of Fame (NIHF) has recognized Leighton and Lewin’s work, naming them both as 2017 inductees.

“We think about the internet and the tremendous accomplishments that have been made and, the exciting thing is, it’s in its infancy,” Leighton says in an Akamai video. Online commerce, which has grown rapidly and is now denting mall sales, has huge potential, especially as dual screen use grows. Soon mobile devices will link to television, and then viewers can change channels on their mobile phones and click to buy the cool sunglasses Tom Cruise is wearing on the big screen. “We are going to see [that] things we never thought about existing will be core to our lives within 10 years, using the internet,” Leighton says.

Leighton’s former collaborator, Danny Lewin, was pivotal to the early development of Akamai’s technology. Tragically, Lewin died as a passenger on an American Airlines flight that was hijacked by terrorists and crashed into New York’s World Trade Center on Sept. 11, 2001. Lewin, a former Israeli Defense Forces officer, is credited with trying to stop the attack.

According to Akamai, Leighton, Lewin, and their team "developed the mathematical algorithms necessary to intelligently route and replicate content over a large network of distributed servers," which solved congestion that was then becoming known as the "World Wide Wait." Today the company delivers nearly 3 trillion internet interactions each day.

The NIHF describes Leighton and Lewin’s contributions as pivotal to making the web fast, secure, and reliable. Their tools were applied mathematics and algorithms, and they focused on congested nodes identified by Tim Berners-Lee, inventor of the World Wide Web and an MIT professor with an office near Leighton. Leighton, an authority on parallel algorithms for network applications who earned his PhD at MIT, holds more than 40 U.S. patents involving content delivery, internet protocols, algorithms for networks, cryptography, and digital rights management. He served as Akamai’s chief scientist for 14 years before becoming chief executive officer in 2013.

Lewin, an MIT doctoral candidate at the time of his death, served as Akamai’s chief technology officer and was an award-winning computer scientist whose master’s thesis included some of the fundamental algorithms that make up the core of Akamai’s services. Before coming to MIT, Lewin worked at IBM’s research laboratory in Haifa, Israel, where he developed the company’s Genesys system, a processor verification tool. He is named on 25 U.S. patents.

“It is a special honor to be listed among so many groundbreaking innovators in the National Inventors Hall of Fame,” says Leighton. “And I am very grateful to Akamai’s employees for all their hard work over the last two decades to turn a dream for making the Internet be fast, reliable, and secure, into a reality.”

The 2017 National Inventors Hall of Fame induction ceremony will take place on May 4 in Washington.


February 2, 2017 | More

Wearable AI system can detect a conversation’s tone

It’s a fact of nature that a single conversation can be interpreted in very different ways. For people with anxiety or conditions such as Asperger’s, this can make social situations extremely stressful. But what if there was a more objective way to measure and understand our interactions?

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say that they’ve gotten closer to a potential solution: an artificially intelligent, wearable system that can predict if a conversation is happy, sad, or neutral based on a person’s speech patterns and vitals.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai, who co-authored a related paper with PhD candidate Mohammad Ghassemi that they will present at next week’s Association for the Advancement of Artificial Intelligence (AAAI) conference in San Francisco. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

As a participant tells a story, the system can analyze audio, text transcriptions, and physiological signals to determine the overall tone of the story with 83 percent accuracy. Using deep-learning techniques, the system can also provide a “sentiment score” for specific five-second intervals within a conversation.

“As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real-time.”

The researchers say that the system’s performance would be further improved by having multiple people in a conversation use it on their smartwatches, creating more data to be analyzed by their algorithms. The team is keen to point out that they developed the system with privacy strongly in mind: The algorithm runs locally on a user’s device as a way of protecting personal information. (Alhanai says that a consumer version would obviously need clear protocols for getting consent from the people involved in the conversations.)

How it works

Many emotion-detection studies show participants “happy” and “sad” videos, or ask them to artificially act out specific emotive states. But in an effort to elicit more organic emotions, the team instead asked subjects to tell a happy or sad story of their own choosing.

Subjects wore a Samsung Simband, a research device that captures high-resolution physiological waveforms to measure features such as movement, heart rate, blood pressure, blood flow, and skin temperature. The system also captured audio data and text transcripts to analyze the speaker’s tone, pitch, energy, and vocabulary.

“The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany, who was not involved in the research. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

After capturing 31 different conversations of several minutes each, the team trained two algorithms on the data: One classified the overall nature of a conversation as either happy or sad, while the second classified each five-second block of every conversation as positive, negative, or neutral.

Alhanai notes that, in traditional neural networks, all features about the data are provided to the algorithm at the base of the network. In contrast, her team found that they could improve performance by organizing different features at the various layers of the network.
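The wiring Alhanai describes can be sketched as follows. This is not the authors' architecture: the sizes, the fixed random weights, and the single "abstract" feature are placeholders chosen only to show low-level signals entering early while a more abstract feature is injected at a deeper layer.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(raw_signal, abstract_feature):
    rng = np.random.default_rng(0)  # fixed seed: untrained demo weights
    W1 = rng.normal(size=(raw_signal.size, 8))
    h1 = relu(raw_signal @ W1)                       # early layer: raw, accelerometer-like data
    h2_in = np.concatenate([h1, abstract_feature])   # abstract feature joins at a deeper layer
    W2 = rng.normal(size=(h2_in.size, 3))
    return h2_in @ W2                                # logits: positive / negative / neutral
```

The design choice being illustrated is simply where features enter the network, rather than concatenating everything at the input layer.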

“The system picks up on how, for example, the sentiment in the text transcription was more abstract than the raw accelerometer data,” says Alhanai. “It’s quite remarkable that a machine could approximate how we humans perceive these interactions, without significant input from us as researchers.”

Results

Indeed, the algorithm’s findings align well with what we humans might expect to observe. For instance, long pauses and monotonous vocal tones were associated with sadder stories, while more energetic, varied speech patterns were associated with happier ones. In terms of body language, sadder stories were also strongly associated with increased fidgeting and cardiovascular activity, as well as certain postures like putting one’s hands on one’s face.

On average, the model could classify the mood of each five-second interval with an accuracy that was approximately 18 percent above chance, and a full 7.5 percent better than existing approaches.
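As a rough illustration of those figures, reading "percent above chance" as percentage points and assuming balanced classes (both are assumptions of this sketch, not statements from the paper):

```python
# Three-class interval task: positive / negative / neutral.
chance_accuracy = 1 / 3                    # balanced-classes assumption
model_accuracy = chance_accuracy + 0.18    # "18 percent above chance"
prior_accuracy = model_accuracy - 0.075    # "7.5 percent better than existing"
```

Under those assumptions the model lands a little above 51 percent accuracy on the three-way task.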

The algorithm is not yet reliable enough to be deployed for social coaching, but Alhanai says that they are actively working toward that goal. For future work the team plans to collect data on a much larger scale, potentially using commercial devices such as the Apple Watch that would allow them to more easily implement the system out in the world.

“Our next step is to improve the algorithm’s emotional granularity so that it is more accurate at calling out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative,’” says Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”

This research was made possible in part by the Samsung Strategy and Innovation Center.


February 1, 2017 | More

Transparent, gel-based robots can catch and release live fish

Engineers at MIT have fabricated transparent, gel-based robots that move when water is pumped in and out of them. The bots can perform a number of fast, forceful tasks, including kicking a ball underwater, and grabbing and releasing a live fish.

The robots are made entirely of hydrogel — a tough, rubbery, nearly transparent material that’s composed mostly of water. Each robot is an assemblage of hollow, precisely designed hydrogel structures, connected to rubbery tubes. When the researchers pump water into the hydrogel robots, the structures quickly inflate in orientations that enable the bots to curl up or stretch out.

The team fashioned several hydrogel robots, including a finlike structure that flaps back and forth, an articulated appendage that makes kicking motions, and a soft, hand-shaped robot that can squeeze and relax.

Because the robots are both powered by and made almost entirely of water, they have similar visual and acoustic properties to water. The researchers propose that these robots, if designed for underwater applications, may be virtually invisible.

Video: Melanie Gonick/MIT

The group, led by Xuanhe Zhao, associate professor of mechanical engineering and civil and environmental engineering at MIT, and graduate student Hyunwoo Yuk, is currently looking to adapt hydrogel robots for medical applications.

“Hydrogels are soft, wet, biocompatible, and can form more friendly interfaces with human organs,” Zhao says. “We are actively collaborating with medical groups to translate this system into soft manipulators such as hydrogel ‘hands,’ which could potentially apply more gentle manipulations to tissues and organs in surgical operations.”

Zhao and Yuk have published their results this week in the journal Nature Communications. Their co-authors include MIT graduate students Shaoting Lin and Chu Ma, postdoc Mahdi Takaffoli, and associate professor of mechanical engineering Nicholas X. Fang.

Robot recipe

For the past five years, Zhao’s group has been developing “recipes” for hydrogels, mixing solutions of polymers and water, and using techniques they invented to fabricate tough yet highly stretchable materials. They have also developed ways to glue these hydrogels to various surfaces such as glass, metal, ceramic, and rubber, creating extremely strong bonds that resist peeling.

The team realized that such durable, flexible, strongly bondable hydrogels might be ideal materials for use in soft robotics. Many groups have designed soft robots from rubbers like silicones, but Zhao points out that such materials are not as biocompatible as hydrogels. As hydrogels are mostly composed of water, he says, they are naturally safer to use in a biomedical setting. And while others have attempted to fashion robots out of hydrogels, their solutions have resulted in brittle, relatively inflexible materials that crack or burst with repeated use.

In contrast, Zhao’s group found that its formulations lent themselves well to soft robotics.

“We didn’t think of this kind of [soft robotics] project initially, but realized maybe our expertise can be crucial to translating these jellies as robust actuators and robotic structures,” Yuk says.

Fast and forceful

To apply their hydrogel materials to soft robotics, the researchers first looked to the animal world. They concentrated in particular on leptocephali, or glass eels — tiny, transparent, hydrogel-like eel larvae that hatch in the ocean and eventually migrate to their natural river habitats.

“It is extremely long travel, and there is no means of protection,” Yuk says. “It seems they tried to evolve into a transparent form as an efficient camouflage tactic. And we wanted to achieve a similar level of transparency, force, and speed.”

To do so, Yuk and Zhao used 3-D printing and laser cutting techniques to print their hydrogel recipes into robotic structures and other hollow units, which they bonded to small, rubbery tubes that are connected to external pumps.

To actuate, or move, the structures, the team used syringe pumps to inject water through the hollow structures, enabling them to quickly curl or stretch, depending on the overall configuration of the robots.

Yuk and Zhao found that by pumping water in, they could produce fast, forceful reactions, enabling a hydrogel robot to generate a few newtons of force in one second. For perspective, other researchers have activated similar hydrogel robots by simple osmosis, letting water naturally seep into structures — a slow process that creates millinewton forces over several minutes or hours.
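The gap between the two actuation regimes is large enough to check on the back of an envelope. The specific numbers below are assumptions picked from the ranges the article gives ("a few newtons in one second" versus "millinewton forces over several minutes"), not measured values:

```python
hydraulic_force_n = 3.0        # "a few newtons" (assumed value)
hydraulic_time_s = 1.0         # delivered in about one second
osmotic_force_n = 0.005        # millinewton scale (assumed value)
osmotic_time_s = 10 * 60       # "several minutes" (assumed: 10 min)

# Compare force delivered per unit time in each regime.
ratio = (hydraulic_force_n / hydraulic_time_s) / (osmotic_force_n / osmotic_time_s)
```

Under these assumptions, pumped actuation outpaces osmosis by several orders of magnitude.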

Catch and release

In experiments using several hydrogel robot designs, the team found the structures were able to withstand repeated use of up to 1,000 cycles without rupturing or tearing. They also found that each design, placed underwater against colored backgrounds, appeared almost entirely camouflaged. The group measured the acoustic and optical properties of the hydrogel robots, and found them to be nearly equal to those of water, unlike rubber and other materials commonly used in soft robotics.

In a striking demonstration of the technology, the team fabricated a hand-like robotic gripper and pumped water in and out of its “fingers” to make the hand open and close. The researchers submerged the gripper in a tank with a goldfish and showed that as the fish swam past, the gripper was strong and fast enough to close around the fish.

“[The robot] is almost transparent, very hard to see,” Zhao says. “When you release the fish, it’s quite happy because [the robot] is soft and doesn’t damage the fish. Imagine a hard robotic hand would probably squash the fish.”

Next, the researchers plan to identify specific applications for hydrogel robotics, as well as tailor their recipes to particular uses. For example, medical applications might not require completely transparent structures, while other applications may need certain parts of a robot to be stiffer than others.

“We want to pinpoint a realistic application and optimize the material to achieve something impactful,” Yuk says. “To the best of our knowledge, this is the first demonstration of hydrogel pressure-based actuation. We are now tossing this concept out as an open question, to say, ‘Let’s play with this.’”

This research was supported, in part, by the Office of Naval Research, the MIT Institute for Soldier Nanotechnologies, and the National Science Foundation.


February 1, 2017 | More

Zeroing in on the chemistry of the air

We breathe it in and out every few seconds, yet the air that surrounds us has chemical activity and variations in its composition that are remarkably complex. Teasing out the mysterious behavior of the atmosphere’s constituents, including pollutants that may be present in tiny amounts but have big impacts, has been the driving goal of Jesse Kroll’s research.

Kroll, an associate professor of civil and environmental engineering and of chemical engineering who earned tenure last year, has been especially focused on studying the role of organic compounds in the air. These carbon-containing compounds include natural emissions from plants as well as products of combustion — everything from gaseous emissions that come from fuel burning in internal combustion engines, to components of soot and other particulate matter that arise from forest fires and other open flames. Such particles are smaller than a micron in diameter but can have outsized environmental effects.

“If you inhale them, they can cause adverse health effects, and they also can affect the Earth’s climate by affecting the amount of sunlight that comes through,” Kroll says.

However, a large fraction of organic particulate matter is not directly emitted into the atmosphere, but instead is formed within the atmosphere from oxidation reactions of gaseous organic species. Understanding such chemical transformations and their effects on atmospheric composition is a daunting task.

“It’s not just that there are a lot of different compounds,” Kroll explains. “Once in the atmosphere, they oxidize, and each one can form 10 or 100 more chemical products, which in turn can form many others. It’s a deeply complex system, so from a chemist’s perspective, it’s a really fascinating field.”

Analyzing these processes requires both detailed sampling and testing out in the field, and complex laboratory experiments that reveal the sequence of changes these chemicals go through once they enter the atmosphere.

Kroll originally hails from Austin, Texas, where his father was a professor of archaeology and classics at the University of Texas. He started to develop an interest in chemistry while in high school. “I figured out that chemistry was really something that grabbed me, because it could be related to something very tangible in the real world,” he recalls.

He moved to the Boston area for college, where he completed his undergraduate studies at Harvard University, majoring in chemistry and earth and planetary sciences. In a freshman environmental chemistry class, he says, “I got to study environmental chemistry, and I knew that was something that I wanted to work on. It was complex but tractable.” He went on to earn his PhD in chemistry there and then moved on to a postdoc position at Caltech, where he spent three years.

Next he moved into industry, taking a job at Aerodyne Research in Billerica, Massachusetts, where he worked on developing instruments for measuring atmospheric chemistry — some of which he still uses in his research. Then, in 2009, he joined the MIT faculty.

He says that with organic aerosols in the atmosphere, “there are so many different reactions and so many different molecules involved, we can’t hope to measure them all.” In addition, the mix of chemicals varies greatly from one region to another. So part of the challenge for atmospheric chemists is to decide how to narrow the problem and which compounds to focus on as being most relevant to both health and environmental effects.

“We try to strike a balance between having an accurate-enough description of this chemistry, but in a simple enough form to be useful for modelers and ultimately policymakers,” he says.

Most of Kroll’s work is in the laboratory, where individual chemical compounds can be introduced into reactors, varying from small flow tubes to sealed chambers the size of a small room, and oxidized under controlled conditions. He and his team then withdraw samples from those reactors in real time to make precise measurements of the evolving chemistry within.

But it’s not all local lab work. Kroll and his students also participate in large, multi-institution field studies, including ground-based atmospheric measurements in California, Alabama, and Colorado, and large-scale lab projects such as a recent one carried out at the U.S. Department of Agriculture’s Missoula Fire Lab in Montana. There, inside a large controlled environment, researchers burned various types of biomass to simulate wildfires, and then measured what came off. “We brought those emissions into a reactor we built, to simulate the aging of biomass burning plumes,” Kroll says.

One of his classes (Traveling Research Environmental Experiences, or TREX) also focuses on fieldwork. Every January during MIT’s Independent Activities Period, he co-leads a group of undergraduates to carry out air quality studies in Hawaii, monitoring the emissions and evolution of sulfur-containing gases emitted from the Kilauea volcano.

Part of all this effort aims to improve the detailed atmospheric models that are used to predict the progress of Earth’s changing climate and the factors affecting it. “There are large and persistent gaps between what models predict and what people measure,” in terms of the details of chemical interactions in the air, and even the amounts and compositions of these organic particles, he says, so it’s important to keep plugging away at understanding and reducing those discrepancies.

“The ultimate objective,” he says, “is to understand what policies could help, and what changes policymakers could make to minimize the negative health and climate effects of particulate pollution.”


February 1, 2017 | More

For refugee camps, a waterless toilet to improve health and safety

One of the most humiliating realities for Middle Eastern refugees involves a basic human need: going to the bathroom. At camps like Zaatari in Jordan, people walk miles and wait in endless lines to use unsanitary facilities, raising the possibility of disease.

The indignity is particularly crushing for girls and young women, who risk being attacked using communal toilets late at night. Others simply try not to go, and risk contracting urinary tract infections.

In response, some refugees have resorted to simply digging pits in the ground and trying to drain the sewage through trenches. It’s a grave sanitary hazard that affects more than 2 billion people worldwide.

Now, an MIT spinout, change:WATER Labs, plans to bring dignified sanitation to this population by developing a compact, evaporative toilet for homes without power or plumbing. Because sewage is mostly water, it’s possible to rapidly vaporize it, eliminating up to 95 percent of daily sewage volumes.

The change:Water Labs team includes: Diana Yousef, a research associate with MIT’s D-Lab; Huda Elasaad, a visiting scholar with MIT’s D-Lab; Conor Smith MBA ’18; and Yongji Wang and Yunteng Cao, PhD students in the MIT Department of Civil and Environmental Engineering.

The toilet uses a polymer material that functions as a sponge, soaking up liquid and releasing it into the air as water vapor; it also retains the residual waste, preventing pollution. Residue would be collected once or twice per month.
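A back-of-envelope check suggests why monthly collection could be workable. The article gives no household figures, so the daily volume below is an assumption for illustration only:

```python
daily_sewage_liters = 10.0     # assumed household output, not from the article
vaporized_fraction = 0.95      # article: "up to 95 percent" vaporized
days_per_month = 30

# Residue left behind after evaporation, accumulated over a month.
residue_per_month = daily_sewage_liters * (1 - vaporized_fraction) * days_per_month
```

At those assumed numbers, only about 15 liters of residue accumulate per month, a volume consistent with collecting once or twice monthly.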

Co-founder Yousef, a biochemist, says her team will build their first prototype in the next several months, using a pilot partner in the Middle East who has offered one of its refugee shelters as a test site. She says the project could be transformative for refugees, especially young girls.

The team is gearing up to participate in the Hult Prize regional social entrepreneurship competition in March. This year’s theme is “Reawakening Human Potential,” and the winner receives $1 million toward their project. Change:Water Labs won the MIT qualifier round of the competition in December.

Smith credits his experience at MIT with helping to develop his innovative mentality.

“When I came to MIT, I knew that the entrepreneurship programs were well-known and strong, but the resources at Sloan and the greater MIT community have been even better and more plentiful than I expected.  In many ways, it has inspired my own endeavors and provided the connections to entrepreneurs with whom I’ve been able to bounce ideas around, seek advice, and collaborate,” Smith says.

To the change:Water Labs team, refugee camps are hopefully just the beginning.

“Safe sanitation for all is a motto and mission of the organization,” Smith says. “Initially, we’re focusing on refugee camps like Zaatari, where the lack of affordable toilets has turned these camps into massive cesspools. And beyond the camps, there is incredible potential to apply this solution to the more than a billion non-sewered households around the world.”


January 31, 2017 | More

Research assistants at energy’s cutting edge

MIT graduate students working in energy conduct widely varied research projects — from experiments in fundamental chemistry to surveys of human behavior — but they share the common benefit of gaining hands-on work experience while helping to move the needle toward a low-carbon future.

“You learn about a lot of wonderful things in theory, in reference books, but you never really get a feel for [research] unless you’re actually involved in it,” says Srinivas Subramanyam, a PhD candidate in materials science and engineering whose work as a research assistant (RA) focuses on developing a lubricant-impregnated surface that may one day keep oil and gas pipelines free of clogs. “Having a research assistantship has been a very good experience.”

“I see this as a first step in a long-term research agenda that I hope to continue in my academic career,” says J. Cressica Brazier, a PhD candidate in urban studies and planning who is developing a mobile carbon footprinting tool to gauge personal energy consumption. Brazier says this RA work has given her a variety of skills — from statistical modeling to team building — that will help her continue to research low-carbon urban development in the years ahead.

The academic track isn’t the only option for well-trained RAs, however. Qing Liu, a PhD candidate in chemistry and a 2016-2017 Shell-MIT Energy Fellow, says he also feels qualified to work as a data scientist, energy analyst, or consultant. “I think the expertise I’ve gained from the research assistantship definitely helped broaden my career choices,” says Liu, whose research centers on a catalytic process that converts airborne pollutants to fuels.

Research assistants are paid to conduct research under the supervision of a faculty advisor, and they often pursue novel investigations of their own design — in many cases leading to doctoral theses and other peer-reviewed publications at the cutting edge of their fields. For this reason, RAs play a crucial role in moving the world toward a low-carbon energy system, says Antje Danielson, director of education at the MIT Energy Initiative (MITEI).

“RAs are the worker bees of the research projects, and they are the people who produce the data and the prototypes that will then lead to discovery and innovation, so they’re very valuable members of the energy innovation ecosystem. They are the future,” says Danielson, noting that Brazier, Liu, and Subramanyam were all supported by MITEI funding. “Meanwhile, they learn lab skills, analytical skills, and if this is their thesis project, they really learn how to analyze a specific topic and write up their findings.”

Making a difference

For Brazier, Liu, and Subramanyam — just three of the more than 2,500 graduate students who work as research assistants and research trainees at MIT — making progress toward a low-carbon energy system is a significant motivator.

“The only way I get motivated is if I know this is something that has the potential to make a difference. Abstract problems don’t really drive me,” Subramanyam says. He therefore focuses his research on the range of problems caused by the deposition of materials on surfaces — for example, ice buildup on airplane wings, wind turbine blades, and overhead power lines, and scale buildup in gas pipelines, geothermal power plants, and water heaters. “Having that end goal in mind — especially being aware that this is a product that’s important to MITEI — that keeps me working on the problem.”

During his research assistantship, Subramanyam succeeded in developing a surface treatment that significantly reduces scale buildup by combining two strategies: changing the morphology of the surface material and adding a coating. The resulting lubricant-impregnated surface promises to improve efficiency in the oil and gas industry by addressing productivity losses due to scale fouling, Subramanyam says.

Improving the efficiency of existing energy systems is also central to Liu’s research, which examines the fundamental catalytic chemistry behind the production of natural gas and liquid fuels using greenhouse gases and airborne pollutants. Liu’s work holds promise for the development of more efficient Fischer-Tropsch catalysts, a critical step in the attainment of carbon neutrality. “I definitely feel I’m helping to make the planet greener,” Liu says.

Brazier takes a different approach to energy research: She explores how human behavior impacts the greenhouse gas emissions that are contributing to climate change. “We need tools to moderate or mitigate how people use the increasing convenience and comfort that comes with new technologies,” Brazier says. She says she hopes the mobile application she is developing will provide individuals with feedback that will motivate greener lifestyle choices.

Gaining practical skills

Whatever specific research RAs focus on, along the way they learn to collaborate, communicate, and persuade others about the validity of their ideas. They also learn project management and how to think systematically about open-ended problems, says Kripa Varanasi, associate professor of mechanical engineering and Subramanyam’s advisor. “They learn a lot of practicalities of how to work in the real world,” he says.

“The scientific method, you first experience it once you start working in the lab yourself, confirming and rejecting potential solutions,” Subramanyam says. “You are pushing the boundaries of knowledge, trying to do things no one has ever done.”

Teamwork is critical, says Liu, noting that his research involves complex and specialized instrumentation that is very tough to operate alone. “There are two to three people on the same machine, working very closely with each other … so it’s really important to us to have good teamwork,” he says. “That’s something I couldn’t learn from class.”

Working with diverse researchers — including faculty members, postdocs, and fellow RAs from a variety of disciplines — rounds out the RAs’ educational experience, the students say. “In terms of really applying statistical tools, I learned more from one RA than I ever did from my sequence of quantitative methods courses,” Brazier says.

Ultimately, the RA experience can be transformative. “They come out of undergrad exposed to many subjects, but they haven’t really gotten their hands wet in a lab,” Varanasi says, noting that within a few years he sees major changes. “They become professionals.”

This article appears in the Autumn 2016 issue of Energy Futures, the magazine of the MIT Energy Initiative. 


January 31, 2017 | More

Optimizing code

Compilers are programs that convert computer code written in high-level languages intelligible to humans into low-level instructions executable by machines.

But there’s more than one way to implement a given computation, and modern compilers extensively analyze the code they process, trying to deduce the implementations that will maximize the efficiency of the resulting software.

Code explicitly written to take advantage of parallel computing, however, usually loses the benefit of compilers’ optimization strategies. That’s because managing parallel execution requires a lot of extra code, and existing compilers add it before the optimizations occur. The optimizers aren’t sure how to interpret the new code, so they don’t try to improve its performance.

At the Association for Computing Machinery’s Symposium on Principles and Practice of Parallel Programming next week, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory will present a new variation on a popular open-source compiler that optimizes before adding the code necessary for parallel execution.

As a consequence, says Charles E. Leiserson, the Edwin Sibley Webster Professor in Electrical Engineering and Computer Science at MIT and a coauthor on the new paper, the compiler “now optimizes parallel code better than any commercial or open-source compiler, and it also compiles where some of these other compilers don’t.”

That improvement comes purely from optimization strategies that were already part of the compiler the researchers modified, which was designed to compile conventional, serial programs. The researchers’ approach should also make it much more straightforward to add optimizations specifically tailored to parallel programs. And that will be crucial as computer chips add more and more “cores,” or parallel processing units, in the years ahead.

The idea of optimizing before adding the extra code required by parallel processing has been around for decades. But “compiler developers were skeptical that this could be done,” Leiserson says.

“Everybody said it was going to be too hard, that you’d have to change the whole compiler. And these guys,” he says, referring to Tao B. Schardl, a postdoc in Leiserson’s group, and William S. Moses, an undergraduate double major in electrical engineering and computer science and physics, “basically showed that conventional wisdom to be flat-out wrong. The big surprise was that this didn’t require rewriting the 80-plus compiler passes that do either analysis or optimization. T.B. and Billy did it by modifying 6,000 lines of a 4-million-line code base.”

Schardl, who earned his PhD in electrical engineering and computer science (EECS) from MIT, with Leiserson as his advisor, before rejoining Leiserson’s group as a postdoc, and Moses, who will graduate next spring after only three years, with a master’s in EECS to boot, share authorship on the paper with Leiserson.

Forks and joins

A typical compiler has three components: the front end, which is tailored to a specific programming language; the back end, which is tailored to a specific chip design; and what computer scientists oxymoronically call the middle end, which uses an “intermediate representation,” compatible with many different front and back ends, to describe computations. In a standard, serial compiler, optimization happens in the middle end.

The researchers’ chief innovation is an intermediate representation that employs a so-called fork-join model of parallelism: At various points, a program may fork, or branch out into operations that can be performed in parallel; later, the branches join back together, and the program executes serially until the next fork.
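The fork-join pattern can be sketched in Python rather than in Cilk or LLVM IR (the languages the paper actually targets); this shows only the control structure, not the researchers' runtime or work-stealing scheduler:

```python
from concurrent.futures import ThreadPoolExecutor

def branch_work(x):
    # A branch of the computation that can run independently.
    return x * x

def fork_join_sum(values):
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(branch_work, v) for v in values]  # fork
        partials = [f.result() for f in futures]                 # join
    return sum(partials)  # serial continuation after the join
```

Each `submit` call forks a branch; waiting on the results is the join, after which execution is serial again until the next fork.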

In the current version of the compiler, the front end is tailored to a fork-join language called Cilk, pronounced “silk” but spelled with a C because it extends the C programming language. Cilk was a particularly congenial choice because it was developed by Leiserson’s group — although its commercial implementation is now owned and maintained by Intel. But the researchers might just as well have built a front end tailored to the popular OpenMP or any other fork-join language.

Cilk adds just two commands to C: “spawn,” which initiates a fork, and “sync,” which initiates a join. That makes things easy for programmers writing in Cilk but a lot harder for Cilk’s developers.

With Cilk, as with other fork-join languages, the responsibility of dividing computations among cores falls to a management program called a runtime. A program written in Cilk, however, must explicitly tell the runtime when to check on the progress of computations and rebalance cores’ assignments. To spare programmers from having to track all those runtime invocations themselves, Cilk, like other fork-join languages, leaves them to the compiler.

All previous compilers for fork-join languages are adaptations of serial compilers and add the runtime invocations in the front end, before translating a program into an intermediate representation, and thus before optimization. In their paper, the researchers give an example of what that entails. Seven concise lines of Cilk code, which compute a specified term in the Fibonacci series, require the compiler to add another 17 lines of runtime invocations. The middle end, designed for serial code, has no idea what to make of those extra 17 lines and throws up its hands.

The only alternative to adding the runtime invocations in the front end, however, seemed to be rewriting all the middle-end optimization algorithms to accommodate the fork-join model. And to many — including Leiserson, when his group was designing its first Cilk compilers — that seemed too daunting.

Schardl and Moses’s chief insight was that injecting just a little bit of serialism into the fork-join model would make it much more intelligible to existing compilers’ optimization algorithms. Where Cilk adds two basic commands to C, the MIT researchers’ intermediate representation adds three to a compiler’s middle end: detach, reattach, and sync.

The detach command is essentially the equivalent of Cilk’s spawn command. But reattach commands specify the order in which the results of parallel tasks must be recombined. That simple adjustment makes fork-join code look enough like serial code that many of a serial compiler’s optimization algorithms will work on it without modification, while the rest need only minor alterations.

Indeed, of the new code that Schardl and Moses wrote, more than half was the addition of runtime invocations, which existing fork-join compilers add in the front end, anyway. Another 900 lines were required just to define the new commands, detach, reattach, and sync. Only about 2,000 lines of code were actual modifications of analysis and optimization algorithms.

Payoff

To test their system, the researchers built two different versions of the popular open-source compiler LLVM. In one, they left the middle end alone but modified the front end to add Cilk runtime invocations; in the other, they left the front end alone but implemented their fork-join intermediate representation in the middle end, adding the runtime invocations only after optimization.

Then they compiled 20 Cilk programs on both. For 17 of the 20 programs, the compiler using the new intermediate representation yielded more efficient software, with gains of 10 to 25 percent for a third of them. On the programs where the new compiler yielded less efficient software, the falloff was less than 2 percent.

“For the last 10 years, all machines have had multicores in them,” says Guy Blelloch, a professor of computer science at Carnegie Mellon University. “Before that, there was a huge amount of work on infrastructure for sequential compilers and sequential debuggers and everything. When multicore hit, the easiest thing to do was just to add libraries [of reusable blocks of code] on top of existing infrastructure. The next step was to have the front end of the compiler put the library calls in for you.”

“What Charles and his students have been doing is actually putting it deep down into the compiler so that the compiler can do optimization on the things that have to do with parallelism,” Blelloch says. “That’s a needed step. It should have been done many years ago. It’s not clear at this point how much benefit you’ll gain, but presumably you could do a lot of optimizations that weren’t possible.”


January 30, 2017 | More