DXPG

Sunday, February 24, 2013

Mozilla Builds an Operating System for a Firefox Smartphone

Apple and Google are getting a nonprofit competitor, which aims to swamp them by using Web-based technologies that act like smartphone apps.

Mozilla, the nonprofit organization that created and maintains the Firefox Web browser, has announced that it will release a smartphone operating system around June that handset makers can use to build inexpensive phones. Mozilla envisions the phone being sold in poorer countries with few smartphones, probably for $80 to $100 before subsidies.

By comparison, an iPhone 5 costs $650 to $850 without a subsidy from a carrier, which usually offers the phone for less but locks the customer into a long-term contract. An Android phone such as the Samsung Galaxy S III costs $600.

“This is the start of what will undoubtedly be a third ecosystem,” after Google’s Android and Apple’s iOS, Gary Kovacs, the chief executive of Mozilla, said in an interview before the announcement. “The next 2 billion smartphone users will come from the developing world.”

Phone makers including LG, Huawei, TCL, and ZTE already plan to manufacture phones with the new operating system. On Sunday Mozilla showed models of phones from TCL and ZTE at a telecommunications industry meeting in Barcelona, Spain.

The display models running Firefox OS look and work much like Apple or Google phones, and come in more colors and with different bodies, including hard metal and rubber. Owners of the phone can send texts and e-mail and use a Web browser and a camera, among other features.

Just as important to the phone’s success as a low price, however, is how Mozilla plans to create and sell apps. Like Apple and Google, Mozilla plans to offer a store for its phones in which developers can sell apps, such as games, information services or specialized maps; some of them will be free.

Unlike Android or iOS, which require a developer to learn a special way of writing software for each respective operating system, developers on Firefox OS will write their apps in HTML5, a commonly understood language for writing Web pages.

The Firefox OS marketplace will open with just a few apps, including Wikipedia, Twitter, AccuWeather, a Web news reader called Pulse and an audio service called SoundCloud. Mr. Kovacs said the ease of writing for Firefox OS would make possible a rapid growth of apps in local languages and for different cultural needs.

“There are about 100,000 iOS developers, and 400,000 Android developers, but there are 10 million Web developers,” Mr. Kovacs said. “They will be able to work with this, almost out of the box. One million people in Brazil, or a half-million in Poland, would be enough.”

Besides Brazil, which has a population of about 196 million people, or Poland, with 38 million, initial countries to get the phone include Colombia, Hungary, Mexico, Montenegro, Serbia, Spain and Venezuela.

In early February, Microsoft and Huawei jointly announced a $150 smartphone aimed at seven African countries that runs the Windows Phone operating system. To date, Mozilla does not have a presence in Africa, one of the world’s fastest-growing mobile markets.

Mozilla announced the phone operating system along with 15 mobile phone operators, including China Unicom and Sprint, the four device manufacturers and associated hardware providers. It is not yet clear how much each company will spend engaging outside developers, who still must learn how to build and sell apps rather than make Web pages. For the true believers in a free Web at Mozilla, that is worth the trouble, since apps tend to be controlled by the company that makes the operating system.

Besides the apps available on the phone, a search feature in the phone does an impressive job anticipating what the customer is searching for, with fast-moving graphics illustrating choices.

Typing in the letter “j,” for example, brought up a picture of Jackie Robinson. Adding two letters to make “jaz” suggested “jazz,” with a picture of Louis Armstrong. With “jaze,” the suggestion was “Al Jazeera,” along with a photo of its newsroom.

While the phone could then go to the Arabic version of the Web site, it could not manage a conversion to English, suggesting the proprietors of the site would have to change something in their code to make it work for the phone.

Mr. Kovacs said that problems like that would be solved quickly, much the way some 400,000 volunteers have contributed to fixing bugs in Firefox since the first browser was introduced in 2002.

“Our quality assurance will follow the same path,” he said. “We get feedback and we address the problems.”



There Is an Algorithm for Everything, Even Bras

THE two and a half miserable hours that Michelle Lam spent in a fitting room, trying on bras, one fine summer day in 2011 would turn out to be, in her words, a “life-changing experience.” After trying on 20 bras to find one that fit, and not particularly well at that, she left the store feeling naked and intruded upon.

A screen shot from True&Co’s bra-fitting quiz. A customer starts by entering the size and manufacturer of her current favorite bra. The company then uses an algorithm to try to find a better fit.

“It occurred to me in that fitting room, as I was waiting for that saleswoman to bring me bras: Wow, this is the worst shopping experience on earth,” she said. (My wife concurs.) From her frustration that day emerged an idea for a business called True&Co.

The history of e-commerce is marked by start-ups devising ways to sell products that were once thought of as unsuitable for sale online. Shoes were not supposed to be something customers would buy online, but then Zappos showed it could be done. The same thing was said about eyeglasses, until Warby Parker came along. But bras, which are among the most personal items someone can buy, represent the Everest of online retail challenges.

Ms. Lam opened True&Co last year with two co-founders, Dan Dolgin and Aarthi Ramamurthy. The company, based in San Francisco, is certainly not the first to sell lingerie online. Older sites include the Web arm of Victoria’s Secret and HerRoom.com, which was founded in 1998, near the dawn of the Age of E-Commerce.

Professional bra fitters have also moved online. Linda Becker, whose family owns two bra stores in New York, says she sells twice as many bras online today at LindaTheBraLady.com as she does in her stores. Some of her online customers have previously visited one of her shops and been fitted in person. But new customers take their own measurements and work with customer service representatives on the phone. She says only 10 percent of online orders are returned.

But some customers turn out to be extremely hard to fit and it’s hard to tell why, Ms. Becker says. “That kind of customer will be impossible to fit online because the problem is unseen. There’s no way of figuring it out over the phone.”

True&Co’s innovation is to put a batch of bras into customers’ hands so they can choose what fits best. New customers take a quiz, modeled on the ones in Cosmopolitan magazine that Ms. Lam fondly remembers filling out in high school, to collect the information needed to fit the bra properly. They are then invited to pick three bras in different styles.

True&Co uses an algorithm to pick two additional bras to send out, based on what can be discerned from the customer’s choices. So the customer ends up with five bras to try on at home, with no obligation to buy. Most of the company’s bras are priced from $45 to $62.

The 15-question quiz asks for the customer’s band and cup size and the manufacturer of her current “best fitting (and beloved) bra,” and works from there to determine how the fit of that favorite bra could be improved. Other quiz questions include “Do your cups runneth over?”, with answers citing things like cleavage or underarms, or “No spills, all good.” The question “What is your shape?” is followed by these choices: Well-Rounded, Bottom Happy, Taking Sides and Bottom & Sides.

“We have an algorithm that defines 2,000 body types,” Ms. Lam said. True&Co does not make customized bras for each of those 2,000 body types, however, so much of the taxonomy’s precision is lost when it must be translated into the far fewer combinations of band and cup measurements used by bra makers.

True&Co has drawn the attention of some skeptics. Last month, a blogger at Open Source Fashion, Sindhya Valloppillil, dismissed the company’s bra-fitting algorithm as “ridiculous,” arguing that a bra must be “touched and tried on.” She mocked the credulity of True&Co’s venture capital investors in a post titled “V.C.’s Think My Boobs Need an Algorithm.”

True&Co actually makes no patently ridiculous claims about the algorithm, which involves matching a woman’s body type to a particular bra based partly on consistent variations among manufacturers for a given size and style. One manufacturer’s 32C may work better for breasts of a certain shape, for example, even if a woman is used to buying a 34B.
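The article stops short of describing the algorithm itself, but the idea it sketches can be illustrated in a few lines. The following is a minimal, hypothetical sketch, not True&Co’s actual method: start from the customer’s best-fitting size, translate quiz answers into band and cup adjustments, then apply brand-specific offsets of the kind the article alludes to. The brand names, offsets and size tables are all invented for illustration.

```python
# A hypothetical sketch, not True&Co's actual algorithm: translate quiz
# feedback and brand-specific sizing quirks into a recommended size.

BANDS = [30, 32, 34, 36, 38]
CUPS = ["A", "B", "C", "D", "DD"]

# Invented per-brand adjustments of the kind fit feedback could reveal.
BRAND_OFFSETS = {
    "BrandX": {"band": 0, "cup": +1},   # cups run small: go up a cup
    "BrandY": {"band": -1, "cup": +1},  # bands run loose: sister-size down
                                        # a band and up a cup to keep volume
}

def shift(size, catalog, steps):
    """Move a size up or down within the catalog, clamped to its ends."""
    i = catalog.index(size) + steps
    return catalog[max(0, min(len(catalog) - 1, i))]

def recommend(band, cup, quiz, brand):
    """Apply quiz-derived adjustments, then the brand's known offsets."""
    band_steps = -1 if quiz.get("band_rides_up") else 0
    cup_steps = +1 if quiz.get("cups_spill_over") else 0
    offsets = BRAND_OFFSETS.get(brand, {"band": 0, "cup": 0})
    return (shift(band, BANDS, band_steps + offsets["band"]),
            shift(cup, CUPS, cup_steps + offsets["cup"]))

# Echoing the article's example: a woman used to a 34B may be matched
# to a (hypothetical) brand's 32C.
print(recommend(34, "B", {}, "BrandY"))  # -> (32, 'C')
```

The last line echoes the article’s example: a woman who usually buys a 34B is matched to a hypothetical brand whose loose bands call for the sister size 32C.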

Customers buy an average of two bras from each batch of five. The company says women end up buying more of the bras chosen by the algorithm than the ones they select themselves.

But as with shoes and eyeglasses, so too with bras: it’s love at first touch and try, even in the digital age.

Randall Stross is an author based in Silicon Valley and a professor of business at San Jose State University. E-mail: stross@nytimes.com.

A version of this article appeared in print on February 24, 2013, on page BU3 of the New York edition with the headline: There Is an Algorithm For Everything, Even Bras.

SimCity, for Real: Measuring an Untidy Metropolis

THE notion of a “science of cities” seems contradictory. Science is a realm of grand theory and precise measurement, while cities are messy agglomerations of people and human foible. But science is precisely the ambition of New York University’s Center for Urban Science and Progress. Founded last year, the center has been getting under way in recent weeks, moving into new office space and firing off its first project proposal to the National Science Foundation.

The center’s director is Steven E. Koonin, a Brooklyn native and graduate of Stuyvesant High School, who came to N.Y.U. after a stint in the Obama administration as the under secretary for science in the Department of Energy. He is both a theoretical physicist and science policy expert. The center shouldn’t lack for intellectual rigor.

The initiative at N.Y.U. is part of a broader trend: the global drive to apply modern sensor, computing and data-sifting technologies to urban environments, in what has become known as “smart city” technology. The goal is big gains in efficiency and quality of life, using digital technology to better manage traffic and curb the consumption of water and electricity, for example. By some estimates, water and electricity use can be cut by 30 to 50 percent over the course of a decade.

Cities from Stockholm to Singapore are deep into smart city projects. The market looms as big, lucrative business for technology companies. “The Smart City movement,” according to a report this month from IDC, a technology research firm, “is emerging and growing as a significant force of innovation and investment at all levels of government.” The N.Y.U. center’s partners include technology companies like I.B.M., Cisco Systems and Xerox, as well as universities and the New York City government.

City governments, like other institutions, have collected data for years to try to become more efficient. There have been some notable achievements, like CompStat, the New York Police Department’s system for identifying crime patterns, introduced in the mid-1990s and later widely adopted elsewhere.

What is different today, says Dr. Koonin, is that digital technologies (sensors, wireless communication, storage and clever software algorithms) are advancing so rapidly that it is becoming possible to see and measure activities in an urban environment as never before.

“We can build an observatory to be able to see the pulse of the city in detail and as a whole,” Dr. Koonin explains.

Dr. Koonin’s digital “observatory” of urban life raises questions about privacy. He is keenly aware of that issue, and vows that the center is engaged in science rather than surveillance. For example, individuals’ names or tax identification numbers would be stripped from personal records.

The collected data, he says, will be the raw material for modeling outcomes: say, the steps required to reduce electricity consumption in a high-rise office building or in an individual apartment. Those modeled predictions, he adds, can guide policy or inform citizens.

“I’d like to create SimCity for real,” Dr. Koonin says, referring to the classic computer simulation game.

To help, Dr. Koonin is forging partnerships with government laboratories to tap their expertise in building complex computer simulations, like climate models for weather prediction.

The path to SimCity will come step by step, through tackling specific projects. The first one is a program to monitor and analyze noise. The largest single cause of complaints to New York’s 311 phone and online service is noise. It is a quality-of-life issue, Dr. Koonin says, and one related to health, especially when noise disrupts sleep.

The 10-member project team includes music professors, computer scientists and graduate students. The group will use the city’s 311 data, but also plans to employ wireless sensors: tiny ones outside windows, noise meters on traffic lights and street corners, perhaps a smartphone app for crowdsourced data gathering. To inform policy choices, data on noise limits for vehicles and muffler costs might be added to the street-level noise readings. Then, computer simulations could predict the likely effect of enforcement steps, charges or incentives to buy properly working mufflers for vehicles without them.
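As a rough illustration of how street-level readings and 311 complaints might be combined, here is a toy sketch in Python. It is not the N.Y.U. team’s pipeline; the neighborhoods, readings, thresholds and complaint records are all invented, and a real analysis would work from the city’s actual data feeds.

```python
# A toy sketch (not the N.Y.U. team's pipeline) of combining street-level
# noise readings with 311 complaint counts to flag neighborhoods where
# enforcement or incentives might matter most. All data here is invented.
from collections import defaultdict

sensor_readings = [            # (neighborhood, hour_of_day, decibels)
    ("Greenwich Village", 2, 78.0),
    ("Greenwich Village", 2, 81.5),
    ("Midtown", 14, 72.0),
]
complaints_311 = [             # (neighborhood, complaint_type)
    ("Greenwich Village", "loud music"),
    ("Greenwich Village", "construction"),
    ("Midtown", "loud music"),
]
NIGHT_HOURS = set(range(0, 6))   # when noise disrupts sleep
NIGHT_LIMIT_DB = 75.0            # hypothetical nighttime threshold

def hot_spots(readings, complaints):
    """Rank neighborhoods by nighttime readings over the limit, then complaints."""
    over_limit = defaultdict(int)
    for hood, hour, db in readings:
        if hour in NIGHT_HOURS and db > NIGHT_LIMIT_DB:
            over_limit[hood] += 1
    complaint_count = defaultdict(int)
    for hood, _ in complaints:
        complaint_count[hood] += 1
    hoods = set(over_limit) | set(complaint_count)
    return sorted(hoods, key=lambda h: (over_limit[h], complaint_count[h]),
                  reverse=True)

print(hot_spots(sensor_readings, complaints_311))
# -> ['Greenwich Village', 'Midtown']
```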

The project, Dr. Koonin says, might also pull in data on traffic flows, garbage pickup times and building classifications. For example, he says, a 2 a.m. garbage pickup could be routed to a neighborhood with little residential housing.

A version of this article appeared in print on February 24, 2013, on page BU3 of the New York edition with the headline: SimCity, For Real: Measuring An Untidy Metropolis.

Dell’s Intentions Get a Hard Look

IS Michael Dell trying to take over the computer company he founded on the cheap?

That’s what more and more Dell shareholders appear to believe about the $13.65 per-share price proposed on Feb. 5 by Mr. Dell and Silver Lake Partners, a technology investment firm. Initial objectors to the buyout have been joined by additional shareholders concerned about getting a fair shake.

The issue of fairness is a hazard of management-led buyouts, of course. Are insiders, who have an enormous information advantage owing to their deep knowledge of a company’s operations, trying to get control of an enterprise when its shares are perhaps temporarily depressed? Over the last year, Dell’s stock has lost 19 percent of its value.

Some investors wonder if Mr. Dell, who owns 14 percent of the shares outstanding, might have a hot new product on the drawing board that has the potential to make the company a highflier again.

Neither management nor Mr. Dell is saying much of anything about the company’s prospects. Last Tuesday, when Dell announced mixed earnings for the year, the company declined to make any projections for coming quarters on the conference call with investors and analysts. Its chief financial officer cited the pending deal as the reason no outlook was given.

As is the case with all insider deals, there’s great potential for outside shareholders to be treated unfairly. Making the deal even more problematic, Dell’s shareholders have little data upon which to assess its price. Dell’s regulatory filings say that the $13.65 per-share price is the result of extensive “bids and arm’s-length negotiations” between Silver Lake and the special committee of Dell’s board beginning in late October 2012.

Still, there’s no mention of how the $13.65 per-share offer stacks up against the company’s long-term enterprise value, an assessment of future earnings potential that is a typical measure in a takeover. Instead, the offer by Mr. Dell and Silver Lake seems based on the company’s recent stock price. Their $24.4 billion deal represents a 37 percent premium to the stock’s average price over the previous three months, they say.

Meanwhile, Southeastern Asset Management, one of Dell’s largest outside shareholders, estimates that the company is worth $23.72 a share, almost 75 percent more than the buyers are offering. Southeastern has come to that conclusion using publicly available information, however, because that’s all it has access to.
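The gap between the two camps can be checked with simple arithmetic. The snippet below uses only the figures reported in the article; the implied three-month average price is derived from those figures, not reported.

```python
# Back-of-the-envelope check of the figures cited in the article.
offer = 13.65          # per-share price proposed by Mr. Dell and Silver Lake
southeastern = 23.72   # Southeastern Asset Management's estimate of value

upside = southeastern / offer - 1
print(f"Southeastern's estimate exceeds the offer by {upside:.0%}")  # ~74%, "almost 75 percent"

# A 37 percent premium to the prior three-month average price implies
# that the average was roughly:
implied_avg = offer / 1.37
print(f"Implied three-month average share price: ${implied_avg:.2f}")  # about $9.96
```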

Naturally, both of these parties have a vested interest in getting their price in the deal. Mr. Dell and his group want to pay as little as possible, while long-suffering outside owners hope for more.

Trying to remedy this unsatisfying situation, an uninvolved investor organization has made an excellent suggestion: an independent, peer-reviewed analysis of Dell’s enterprise value should be done on behalf of its outside shareholders. Based on the same information Dell’s management has, such an assessment would assure investors that they are being bought out at a fair value.

This idea comes from the Shareholder Forum, a nonpartisan, independent creator of programs devised to provide the kind of information investors need to make astute decisions. The Forum, overseen by Gary Lutin, a former investment banker at Lutin & Company, suggests hiring a qualified expert to analyze the company’s operations. This would be similar to the so-called fairness opinions provided to shareholders in takeovers by outsiders. The analysis would be subject to confidentiality when necessary and would be reviewed by recognized analysts, academics and other investment professionals.

On Feb. 14, Mr. Lutin sent a letter to Mr. Dell and Alex Mandl, chairman of the special committee of Dell’s board charged with ensuring the deal’s fairness to all shareholders. In the letter, Mr. Lutin asked that the company support the independent analysis and provide assistance in its preparation.

Mr. Lutin said he had assumed that the board committee and Mr. Dell would want to support this project. “Shareholders have a very well-established right to any information relevant to their investment decisions under Delaware law,” Mr. Lutin said last week. “They also have the right to expect management to be responsible for addressing those interests.”

But last week, Mr. Lutin said that lawyers representing Mr. Mandl and his committee told him they would not be supporting the independent analysis.

A version of this article appeared in print on February 24, 2013, on page BU1 of the New York edition with the headline: Dell’s Intentions Get a Hard Look.

Disruptions: Data Without Context Is Worth Nothing at All

Several years ago, Google, aware of how many of us were sneezing and coughing, created a fancy equation on its Web site to figure out just how many people had influenza. The math works like this: people’s location + flu-related search queries on Google + some really smart algorithms = the number of people with the flu in the United States.
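To make that shorthand concrete, here is a toy sketch of a query-based estimate: count the share of searches that look flu-related and scale it into a prevalence figure. The terms, weights and sample queries below are invented; Google’s actual model was fitted against years of C.D.C. surveillance records across many query terms.

```python
# A toy sketch of query-based flu estimation. The terms, coefficients and
# sample queries are invented for illustration; they are not Google's model.

FLU_TERMS = {"flu", "influenza", "fever", "cough"}

def flu_query_share(queries):
    """Fraction of search queries containing a flu-related term."""
    hits = sum(any(term in q.lower() for term in FLU_TERMS) for q in queries)
    return hits / len(queries)

def estimate_prevalence(queries, beta=0.1, intercept=0.01):
    """Toy linear model: prevalence ~ intercept + beta * query share.
    In a real system beta and intercept would be fitted to historical
    C.D.C. data; these values are invented."""
    return intercept + beta * flu_query_share(queries)

week_of_queries = [
    "flu symptoms in kids",
    "super bowl score",
    "how long does the flu last",
    "weather new york",
]
share = flu_query_share(week_of_queries)   # 0.5: two of four queries
print(f"Flu-related query share: {share:.0%}")
print(f"Estimated flu prevalence: {estimate_prevalence(week_of_queries):.1%}")  # 6.0%
```

The sketch also exposes the failure mode discussed below: if media coverage drives more people to search for flu terms, the query share, and therefore the estimate, rises even when actual prevalence does not.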

So how did the algorithms fare this wretched winter? According to Google Flu Trends, at the flu season’s peak in mid-January, nearly 11 percent of the United States population had influenza.

Yikes! Take vitamins. Don’t leave the house. Wash your hands. Wash them again!

But wait. According to an article in the science journal Nature, Google’s disease-hunting algorithms were wrong: their results were double the actual estimates by the Centers for Disease Control and Prevention, which put the coughing and sniffling peak at 6 percent of the population.

Kelly Mason, a public affairs spokeswoman for Google, said the company’s Flu Trends site was meant to be only one source in addition to the C.D.C. and other flu surveillance methods. “We review and potentially update our model each season,” she said.

Scientists have a theory about what went wrong, as well.

“Several researchers suggest that the problems may be due to widespread media coverage of this year’s severe U.S. flu season,” Declan Butler wrote in Nature. Then add social media, which helped news of the flu spread faster than the virus itself.

In other words, Google’s algorithm was looking only at the numbers, not at the context of the search results.

In today’s digitally connected world, data is everywhere: in our phones, search queries, friendships, dating profiles, cars, food, reading habits. Almost everything we touch is part of a larger data set. But the people and companies that interpret the data may fail to apply background and outside conditions to the numbers they capture.

“Data inherently has all of the foibles of being human,” said Mark Hansen, director of the David and Helen Gurley Brown Institute for Media Innovation at Columbia University. “Data is not a magic force in society; it’s an extension of us.”

Society has encountered similar situations for centuries. In the 1600s, Dr. Hansen said, an early census was recorded in England as the Great Plague of London killed tens of thousands of Britons. To calculate the spread of the disease, officials started recording every christening and death in the city. And although this helped quantify the mortality rate, it also created other problems. There was now an astounding collection of statistical information for scientists to review and understand, but it took time to develop systems that could accurately assess the information.

Now, as we enter a world of big data, we have to learn how to apply context to these numbers.

Dr. Hansen said the problem of data without context could be summed up in a quote from the playwright Eugène Ionesco: “Of course, not everything is unsayable in words, only the living truth.”

I experienced this firsthand in the spring of 2010, when I was an adjunct professor at New York University teaching graduate students in the Interactive Telecommunications Program.

I created a class called “Telling Stories With Data, Sensors and Humans,” with the goal of determining whether sensors and data could become reporters and collect information. Students built little electronic contraptions with $30 computers called Arduinos and attached several sensors, including ones that could detect light, noise and movement.

We wondered if we could use these sensors to determine whether students used the elevators more than the stairs, and whether that changed throughout the day. (Esoteric, sure, but a perfect example of a computer sitting there taking notes, rather than a human.)

We set up the sensors in some elevators and stairwells at N.Y.U. and waited. To our delighted surprise, the data we collected told a story, and it seemed that our experiment had worked.
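A tally of that kind takes only a few lines of code. The sketch below, with invented event data, shows the sort of summary the sensors make possible: counting motion events by location and time of day.

```python
# A sketch with invented data: count motion-sensor events by location and
# time of day to compare elevator use with stair use.
from collections import Counter

events = [                     # (location, hour_of_day)
    ("elevator", 9), ("elevator", 9), ("elevator", 10),
    ("stairs", 21), ("stairs", 22), ("elevator", 22),
]

def usage_by_period(events, morning=range(6, 12), evening=range(18, 24)):
    """Return counts of elevator vs. stair events in the morning and evening."""
    counts = Counter()
    for location, hour in events:
        if hour in morning:
            counts[("morning", location)] += 1
        elif hour in evening:
            counts[("evening", location)] += 1
    return counts

for (period, location), n in sorted(usage_by_period(events).items()):
    print(period, location, n)
# evening elevator 1
# evening stairs 2
# morning elevator 3
```

As the anecdote that closes the column makes clear, such counts say nothing about why a pattern appears; a broken elevator produces the same numbers as a burst of late-night student energy.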

As I left campus that evening, one of the N.Y.U. security guards who had seen students setting up the computers in the elevators asked how our experiment had gone. I explained that we had found that students seemed to use the elevators in the morning, perhaps because they were tired from staying up late, and to switch to the stairs at night, when they became energized.

“Oh, no, they don’t,” the security guard told me, laughing as he assured me that lazy college students used the elevators whenever possible. “One of the elevators broke down a few evenings last week, so they had no choice but to use the stairs.”

E-mail: bilton@nytimes.com




Samsung’s New 8-Inch Tablet Takes on the iPad Mini

Samsung Electronics introduced a new 8-inch tablet, competing directly with Apple's 7.9-inch iPad Mini.