Archive for December, 2009

A Happy Holiday to All

December 24, 2009

A sort of hush is beginning to settle even over the tech world–there hasn’t been a decent new Apple Tablet rumor in hours–so Christmas is nearly upon us.

I’m taking off tomorrow to spend the week with family–both sons, daughter-in-law, and grandson–in southwest Florida–then back just in time to haul off to CES. I’ll post over the holiday if the spirit, or the news, moves me, but chances are it won’t.

Meanwhile, for those who get to enjoy the break, have fun. And for those who are stuck with CES preparations, my sympathies and see you in Vegas.

Another Outage: What’s Up With BlackBerry?

December 23, 2009

Has Research In Motion joined the ever-growing ranks of service providers that put their operations at peril by releasing inadequately tested software? For the second time in a week, BlackBerry users in the Western Hemisphere were without service for several hours on Tuesday. For a service whose stock in trade is rock-solid reliability and security, this is becoming a real business problem.

The official explanation for what went wrong, issued this morning, is neither enlightening nor encouraging:

“A service interruption occurred Tuesday that affected BlackBerry customers in the Americas. Message delivery was delayed or intermittent during the service interruption. Phone service and SMS services on BlackBerry smartphones were unaffected. Root cause is currently under review, but based on preliminary analysis, it currently appears that the issue stemmed from a flaw in two recently released versions of BlackBerry Messenger (versions 5.0.0.55 and 5.0.0.56) that caused an unanticipated database issue within the BlackBerry infrastructure.  RIM has taken corrective action to restore service.
“RIM has also provided a new version of BlackBerry Messenger (version 5.0.0.57) and is encouraging anyone who downloaded or upgraded BlackBerry Messenger since December 14th to upgrade to this latest version which resolves the issue.  RIM continues to monitor its systems to maintain normal service levels and apologizes for any inconvenience to customers.”

The BlackBerry’s greatest strength is also its greatest weakness. RIM can guarantee security, and, most of the time, reliability, by channeling communications through its network operation centers. Other smartphones fetch and send mail by talking directly to mail servers, whether it’s a corporate Exchange server, Hotmail, or a personal ISP account somewhere. BlackBerrys talk only to RIM’s servers, and those servers handle communications with the mail servers. Other data services, including Web browsing, also go through the RIM NOC. The NOC is the bulwark of the BlackBerry system, but it is also a single point of failure.

What is particularly disturbing about the RIM explanation is that the release of a new version of client software was able to bring down the network. RIM doesn’t tell us just how this happened, but it suggests poor programming practice, inadequate testing, or both.

RIM is the only smartphone maker other than Apple to gain market share since the release of the iPhone. RIM’s value proposition, never explicitly stated but always lurking at the back of its marketing, is that while the iPhone may be a lot of fun, the BlackBerry is the serious tool for folks who need to get things done. Two major outages in a week (the system was down for many users for several hours on Dec. 17) badly undercut that message. RIM is lucky that these failures occurred at a time of year when people’s attention to business is at a low ebb and the bad publicity has been relatively minor, but it cannot afford to let this sort of thing go on.

Email: Who Owns the Copyright?

December 21, 2009

In a post on Poynter Online’s E-Media Tidbits on Friday, Paul Bradshaw discusses a situation in which the subject of an e-mail interview wanted to publish the full text of the exchange in a blog, but the journalist who conducted the epistolary interview objected. Bradshaw’s piece, and the comments on it, deal primarily with questions of journalistic practice and ethics, but I wonder about the legal aspects.

As a matter of long-established law, the writer of a letter holds the copyright to its content (anything you write is automatically protected by copyright, though formal registration is a useful step if you ever have to defend the right). Since copyright extends to electronic media, emails are clearly the property of the author.

But the parallel between mail on paper and email breaks down quickly. Most email software, by default, appends the original message to any response, so any sort of email exchange quickly becomes the work of two or more authors. Who owns the rights to this hybrid? If I am interviewed by email, do I need the permission of the interviewer to publish the transcript? Can I publish the answers without the questions? Can I paraphrase the questions in my own words?

If there is any case law on this subject, I haven’t seen it. Does anyone out there know if courts have weighed in on the question?

Smartphone as Remote: An Idea Whose Time Has Come

December 20, 2009

Over the years, I have tried a variety of “universal” remote controls for my menagerie of audio and video devices and I’ve always ended up going back to the messy pile of remotes. Universal remotes, while a great idea in theory, just don’t work very well in practice. They are too hard to program and it’s too hard to remember just how they work with each device.

Sonos iPhone App

Lately, however, I have used an iPhone and an iPod Touch to control music systems from Sonos and Olive Media, as well as a rarely used Apple TV. And these work very well.

The reason is simple. Instead of having to program a remote to control a specific device and then learn just how the remote works with that device, the iPhone lets you download an app. In the case of Sonos, the iPhone app turns your phone into the functional equivalent of a $349 Sonos Controller 200. If your iPhone or Touch is on your home Wi-Fi network, connecting it to the Sonos ZonePlayers on that network is a very simple process. Because of the iPhone’s multitasking limitations, you can’t actively use the remote while on a call, but this turns out not to be a very significant restriction in practice.

Controlling a $1,499 Olive 4 music server with an iPhone or Touch is slightly less satisfying. The main issue is setup. Unlike the Sonos app, which locates your players on the network, the Olive controller app requires you to enter the numerical network address of the server manually. This isn’t terribly difficult, since it is easy to display the IP address on the Olive 4’s touch display panel, but it is a geeky and unnecessary step. On the other hand, once set up, the Olive app works much better than the rather clumsy infrared control supplied with the server and gives you all the functionality of the display panel from anywhere on the network.
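I don’t know exactly what discovery mechanism the Sonos app uses under the hood, but finding players on a home network without asking the user for an IP address is a well-solved problem. Here is a minimal sketch, in Python, of the standard UPnP/SSDP multicast search that many networked media devices answer; it is purely illustrative, not the actual Sonos or Olive code:

    import socket

    # Minimal SSDP (UPnP discovery) search: send an M-SEARCH request to the
    # standard multicast address and print whatever devices respond.
    # Illustrative only; not the actual Sonos or Olive protocol.
    MSEARCH = "\r\n".join([
        "M-SEARCH * HTTP/1.1",
        "HOST: 239.255.255.250:1900",
        'MAN: "ssdp:discover"',
        "MX: 2",
        "ST: ssdp:all",
        "", "",
    ])

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.settimeout(3)
    sock.sendto(MSEARCH.encode("ascii"), ("239.255.255.250", 1900))

    try:
        while True:
            data, addr = sock.recvfrom(8192)
            print(addr[0], data.splitlines()[0].decode("ascii", "replace"))
    except socket.timeout:
        pass  # no more responses

Any controller app that does something like this can offer the user a list of players to pick from instead of a box asking for a numerical address.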

Traditional remotes transmit pulses of infrared light to control devices. This has a number of drawbacks: Communication is one-way only, the remote must have a clear line of sight to the device it is controlling, and the pulse codes give you a limited repertoire of functions. Some device makers have gotten around some of these restrictions by using proprietary radio communications, but the proprietary nature makes it difficult or impossible for universal remotes to replicate their functions. Bluetooth would be an option, but with the exception of the Sony PlayStation 3, hardly any consumer electronics devices other than phones support it.

The opening for smartphones exists because wired and Wi-Fi network connections are becoming ubiquitous on audio-visual devices of all sorts. And once a device is on the network, it can be controlled over the network. There’s no particular reason why this role should be limited to iPhones other than the richness of the iTunes App Store environment. Any Wi-Fi handset capable of downloading apps should work fine as a remote. All that really needs to be done at this point is for the makers of networked TVs, set-top boxes, Blu-ray players, game consoles, and every other sort of device to publish the apps. (In my crowded video rack, everything but the TV display, an old DVD player, and an ancient VHS tape player is networked.)

The day may not be far away when I can finally consign all those IR remotes to a drawer somewhere and forget about them.

Bing on iPhone and the New Microsoft

December 16, 2009

Not that long ago, Microsoft operated on a very simple principle: What’s good for Windows is good for Microsoft. Initiatives within the company, from enterprise software to smartphones, stood or fell on the basis of how much they contributed to the core Windows effort.

This contributed mightily to Microsoft’s growth, power, and profits in the days when the PC was king. But as the PC and Windows have become less and less dominant in our technological lives, the Windows-above-all rule has made less and less sense and has begun to fall away. And that is why a very good, free Bing app made its appearance today in the iTunes App Store.

In fairness, Bing is all about driving search traffic and advertising revenue to the Microsoft service and the effort has been OS- and platform-agnostic from the get-go. Whereas the original MSN was highly Windows-centric, Bing was designed to work in any browser and with any OS. The push behind Bing also reflects Microsoft’s recognition of the new power structure of the industry. While CEO Steve Ballmer enjoys bashing Apple at every opportunity, Google is the threat that matters. And with once cozy relations between Apple and Google growing increasingly frosty, there may even be some room for common cause.

The iPhone app comes after Bing apps for Windows Mobile and BlackBerry (but, tellingly, not Google’s Android). The iPhone version, however, is considerably superior to the offerings on other platforms. Its very clean home page focuses on six Bing strengths: image search, local business search, movies, news, maps, and directions.

The maps are particularly good; I liked them better than the Google-based iPhone Maps app. You have a choice among schematic road maps, “shaded” maps that show topographical features, satellite images, and satellite-road hybrids, all available with an overlay of traffic conditions. You can get turn-by-turn instructions, but, like those in Maps and unlike the new Google Maps app for Android, you don’t get real-time navigation. A GPS-based image shows your current position on the map, but you have to move manually from instruction to instruction.

Voice search is supported throughout the application. I found it worked OK, but not as well as Android voice search on the Motorola Droid.

All in all, the Bing app is a welcome addition to the iPhone. And a solid step by Microsoft toward the World After Windows.

Time To Make Phone Calls Sound Better?

December 15, 2009

In the amazing advance of technology over the past half century or so, there’s been one unfortunate constant. The audio quality of phone calls today is pretty much what it was in the mid-1930s. Just about everything involved in the transmission of phone calls has changed during that time, but because the public switched telephone network is locked into audio protocols that allow just 3 kHz of bandwidth, voice quality is frozen in time.

A loose confederation of hardware makers, software companies, and service providers, including tele- and videoconferencing specialist Polycom, voice-over-IP chipmaker DSP Group, and VoIP service provider phone.com, is aiming to change this. Flying under the banner of HD Connect–yet another bad analogy to HD television–the group plans to take advantage of abundant bandwidth and digital technologies to improve voice quality.

They make a good case. Human speech typically ranges from about 80 to 14,000 Hz, but traditional phone systems chop off everything below 300 and above 3,300 Hz. That’s a wide enough frequency response to provide intelligible speech, barely, but our brains are remarkably good at making up for what our ears don’t hear. Most of the time, we can differentiate between a “b” and a “p” sound over the phone, even though the audio cues that make the difference between these voiced and unvoiced plosive consonants live in the higher frequencies that are lost over phone circuits. HD Connect wants to provide at least 7 kHz of frequency response, though this can often be delivered in less actual network bandwidth through the miracles of digital signal processing.

HD Connect is wisely not trying to impose a single standard. It will support multiple codecs–the software that encodes and decodes audio as digital signals–so that two HD Connect handsets, speakerphones, or other endpoints should be able to negotiate a common protocol. Where necessary, systems such as conference bridges can convert, or transcode, signals.
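Stripped of the telephony jargon, the negotiation idea is simple: each endpoint advertises the codecs it supports in order of preference, and the call uses the best one both ends share. Here is a toy sketch in Python; the codec names and sample rates are common published figures, not anything specific to HD Connect:

    # Toy model of codec negotiation: the caller lists its codecs in order of
    # preference, the callee lists what it supports, and the call runs on the
    # first match. Real systems do this via SDP offer/answer; this is only an
    # illustration.
    SAMPLE_RATES_HZ = {
        "G.722": 16000,    # wideband, roughly 7 kHz of audio bandwidth
        "AMR-WB": 16000,   # wideband
        "G.711": 8000,     # classic narrowband telephony, ~3.4 kHz
    }

    def negotiate(caller_prefs, callee_supports):
        """Return the first codec the caller prefers that the callee also supports."""
        for codec in caller_prefs:
            if codec in callee_supports:
                return codec
        return None

    chosen = negotiate(["G.722", "AMR-WB", "G.711"], {"AMR-WB", "G.711"})
    print(chosen, SAMPLE_RATES_HZ[chosen])   # AMR-WB 16000: both ends get wideband

If the only overlap is plain old G.711, the call simply falls back to ordinary narrowband quality, which is why transcoding at conference bridges matters.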

Both the intelligibility and the overall audio quality of speech were greatly enhanced in the HD Connect demos that I tried. There was also a much greater sense of presence, the feeling that the person you are talking to might actually be in the same room. In a conference call, individual voices were much easier to differentiate and some accents much easier to understand.

Still, I think the HD Connect folks face a tough road ahead. We have managed to create a wireless phone system that reproduces the miserable voice quality of the landline network it is rapidly replacing. The wireless carriers are not about to change their protocols, which in any event would require all of us to replace our mobile handsets. In theory, we could all be making VoIP calls on our smartphones, and with VoIP, software alone can determine the audio quality. But the wireless carriers have a vast investment in voice infrastructure and, terrible quality or no, they are not going to give it up without a fight.

My guess is that HD Connect technology will make its first inroads in what amount to walled gardens, such as speakerphones and conferencing systems that run on internal corporate VoIP networks. For telephony at large, however, lousy audio quality, though easily preventable, is likely to be with us for a long time to come.

Netbooks: Time to Wait?

December 14, 2009

If you were thinking of treating yourself or someone else to a netbook for Christmas, you might do better to wait a bit. Of course, when buying technology there’s always the fear that any time you buy, you risk instant obsolescence. But in the case of diminutive, low-cost laptops, that fear right now should be stronger than usual.

The Consumer Electronics Show at the beginning of January will see a spate of announcements of netbooks using a new Intel platform codenamed Pine Trail, based on a new version of the Atom processor called Pineview (blame Intel, not me, for these codenames). The promise: higher performance without compromising battery life, and perhaps even improving it. You can expect to see Pine Trail-based netbooks from every laptop manufacturer early in 2010.

One of the big issues for netbooks has been their weak graphics performance. This has been most evident when playing high-quality video from sites that run on Adobe Flash, such as Hulu. The combination of more processing power and a new 10.1 version of the Flash player is supposed to cure the problem. But we’ll have to wait to see how these new systems compare in both performance and battery life with the currently available combination of an Atom processor and Nvidia’s Ion chipset, which is found in some higher-end netbooks such as the Samsung NP-N110 or the Lenovo IdeaPad S12. These systems have typically offered much better graphics performance, but at some penalty in battery life.

A little further down the road is a new class of laptop, sometimes called a smartbook, that is based on an ARM processor, the sort generally used to run smartphones, rather than the Atom, which uses the same x86 instructions as Intel’s laptop and desktop chips. These smartbooks won’t run Windows, which works only on x86, but some flavor of Linux, including Google’s Android. Unlike netbooks, which at least in theory have all the functionality of bigger laptops, smartbooks will be designed primarily to browse the Web and will probably offer little in the way of native applications; don’t expect to run Microsoft Word or Outlook on one.

In a slightly premature announcement, Qualcomm put out the word last month that Lenovo will show a smartbook based on Qualcomm’s Snapdragon processor at CES. Other announcements are expected at the Mobile World Congress in Barcelona in February. Some netbooks are beginning to be sold by wireless carriers at subsidized prices in exchange for a two-year wireless data contract. This will be the dominant way that smartbooks are sold.

The products are intended to fill a niche–which may or may not turn out to exist–between small x86 notebooks and smartphones. And they are likely to end up competing in that space with Apple’s still mythical tablet.

Amazon’s New Video Deal: A Move Toward Pay Once, Play Anywhere

December 10, 2009

Amazon announced today that if you buy “selected” DVD or Blu-ray copies of movies and TV shows, you will receive both your discs and the right to stream or download the selections from Amazon’s Video On Demand service. My first reaction was to agree with John Gruber’s Daring Fireball post: “If you’re buying a Blu-ray disc, why would you want to watch a standard-def On Demand version?”

On further reflection, however, I realized that this could be the start of something important. The studios have long had this absurd dream that they could get us to pay multiple times for the same content. We’d pay to see a movie in the theater, then buy a DVD, then pay again to download or stream the film. Only we wouldn’t. And faced with reality, the studios’ resistance is slowly–very slowly–breaking down.

The answer to Gruber’s question is actually fairly obvious: DVDs and especially Blu-ray Discs are for watching on your big-screen TV. But these days we consume media in many places on many devices, and playing a physical disc on them is inconvenient and often impossible. Sometimes we need that content in downloadable or streaming form. And while the geekier among us know how to rip the DVD and transcode it for the device of our choice, that solution is 1) technically illegal, even if you own a physical copy of the content, and 2) a royal pain.

Amazon’s solution is only a baby step. While a number of set-top solutions support Amazon movies, including Roku players, some TiVos, and an assortment of TV sets and Blu-ray players, these attach to the big screen, and that’s what you bought the discs for. It also works on Macs and Windows PCs, which gets a little more interesting; you can watch the content on a laptop in a Wi-Fi-less airplane without dragging the discs along, or on a DVD-less netbook. But it won’t work on iPhones or any other handheld devices.

An equally big problem is that qualifier of “selected” content. Only a relatively small fraction of Amazon’s video catalog is available for Disc+ On Demand. Amazon, typically, did not give any numbers, but the available offerings seem to be long on classics and short on recent hits. This, of course, is not Amazon’s fault. Left to its own devices, I suspect Amazon would extend Disc+ to the entire catalog. But the studios control the availability, and the studios are still stingy.

Those are pretty big qualifications. But while the step is small, it’s in the right direction. The resistance is crumbling slowly, but it is crumbling.

TSA and Cyber Security: The Real Problem

December 9, 2009

Like many folks who travel, I have long thought the Transportation Security Administration’s screening procedures frequently showed a lack of common sense. But the lack of sense, and of sensible security procedures, has never been more glaring than in the release of a TSA screening manual on the Web.

I don’t claim any expertise in transportation security and I have no idea what the impact of this easily preventable breach will be. I am much more concerned with what it says about the general state of information security at TSA and its parent agency, the Department of Homeland Security, which is supposed to play a major role in the federal government’s cyber security efforts.

The 93-page manual was posted on a government Web site designed to give information to potential contractors. It’s not quite clear whether a scrubbed version of the document should ever have been put up for public view, but the big problem was that TSA used a totally inadequate method of blacking out the sensitive text in the Adobe Acrobat document. A blog called The Wandering Aramean posted word that the document was available, along with handy instructions on how to read the redacted material. The original TSA post has been taken down, but Cryptome.org has posted two versions, one with the redactions removed and one that has been securely redacted. (Both are zipped PDF files.)
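The underlying failure is easy to demonstrate in general terms. If a PDF is “redacted” by drawing black rectangles over the sensitive passages, the text itself is still embedded in the file, and any off-the-shelf extraction tool will read it straight back out. A minimal sketch using the open-source pdfminer.six library (the filename is hypothetical, and I have no idea what tools TSA actually used):

    # If redaction was done by drawing opaque boxes over the text, the text is
    # still stored in the PDF; a generic extractor recovers it unchanged.
    # Requires the open-source pdfminer.six package. Filename is hypothetical.
    from pdfminer.high_level import extract_text

    text = extract_text("badly_redacted_manual.pdf")
    print(text[:2000])   # the "blacked out" passages appear as ordinary text

Doing it right means actually deleting the sensitive text from the file, not just hiding it behind a graphic.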

It’s the existence of the securely redacted file that should be most embarrassing to TSA and DHS. Back in 2005, in response to a series of breaches involving insecure redaction of both Microsoft Word and Acrobat files by businesses and government agencies, the Information Assurance Directorate of the National Security Agency released a handy and thorough handbook (PDF) on secure redaction of documents.

You would think that after four years, anyone charged with protecting sensitive information in documents through redaction would have this document at his or her fingertips, if not committed to memory. The fact that DHS has not imposed this sort of security discipline throughout the agency is what the hordes of investigators now pouncing on TSA should really be looking at. And we can only hope that the whole incident will serve as a warning to both businesses and government agencies that there is a right way and a wrong way to redact documents before publication, and that failing to take the simple steps needed to do it right can be very costly.

Intel’s Struggles in Graphics Continue with Shelving of Larrabee

December 7, 2009

Intel has long believed that computers work best when they are designed with powerful general-purpose processors that do just about everything. That’s not too surprising since Intel’s success has been built almost entirely on general-purpose processors, from the 8086 of the late 1970s to today’s Core family.

But there has always been a counter-thrust in the semiconductor industry that has promoted specialized silicon to handle specific jobs, such as signal processing. Overall, the fight has been something of a draw, with Intel owning the PC market and the makers of more specialized chips such as Texas Instruments winning in the world of smartphones, cameras, and the like.

In the last couple of years, however, the world of PC graphics has become a major battleground between these two philosophies. And right now, it looks like Intel is in some trouble. Intel still supplies the graphics adapters for the great majority of PCs sold. But these go into low-end systems: the graphics adapters are integrated into the chipsets that support the main processor and share memory with the processor and everything else it is doing. The high-end part of the business is dominated by separate, or discrete, graphics adapters that feature dozens of specialized processors and large amounts of dedicated memory. This business is owned by Nvidia and the ATI unit of AMD.

The problem is that the graphics demands of computers are soaring. At one time, only gamers, artists and graphic designers, scientists, and engineers really cared very much about graphics quality. Now nearly everyone is running graphics-intense applications, particularly high-quality video. To see the difference an improvement in graphics processing can make, all you have to do is look at Hulu.com on a standard Atom-powered netbook with Intel’s integrated graphics and then on one, such as the Lenovo IdeaPad S12, with Nvidia’s Ion graphics.

Intel recognized this growing problem a couple of years ago and started development of its own discrete graphics adapter chip. But the project, called Larrabee, has gone anything but smoothly, and after falling further and further behind schedule, Intel has had to admit that its plans to introduce a Larrabee product in the next year or two are dead.

“Larrabee silicon and software development are behind where we had hoped to be at this point in the project,” Intel spokesman Nick Knupffer wrote in an email. “As a result, our first Larrabee product will not be launched as a standalone discrete graphics product, but rather be used as a software development platform for internal and external use.”

The announcement is likely to cause much gloating at Nvidia and AMD. Nvidia CEO Jen-Hsun Huang has stopped just short of saying that it really doesn’t matter what sort of general-purpose processor you have in a computer as long as you have enough firepower in your graphics adapter, now more often called a graphics processing unit. Nvidia has been promoting a technology called Cuda that offloads processing chores from the CPU to the GPU. AMD, which of course sells both general-purpose processors and GPUs, has been singing a less aggressive version of the same tune.
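To give a flavor of what that offloading looks like in practice, here is a minimal sketch using the open-source PyCUDA bindings; it simply runs a trivial bit of arithmetic on the graphics chip instead of the main processor. It is illustrative only and assumes an Nvidia GPU with the Cuda toolkit installed:

    # Minimal illustration of CPU-to-GPU offload with Cuda, via the open-source
    # PyCUDA bindings: the kernel below executes on the graphics processor.
    import numpy as np
    import pycuda.autoinit            # sets up a Cuda context on the default GPU
    import pycuda.driver as drv
    from pycuda.compiler import SourceModule

    mod = SourceModule("""
    __global__ void scale(float *out, const float *in, float factor)
    {
        int i = threadIdx.x;
        out[i] = in[i] * factor;
    }
    """)
    scale = mod.get_function("scale")

    data = np.random.randn(256).astype(np.float32)
    result = np.empty_like(data)
    scale(drv.Out(result), drv.In(data), np.float32(2.0),
          block=(256, 1, 1), grid=(1, 1))
    print(np.allclose(result, data * 2.0))   # True: the GPU did the multiply

The interesting applications, of course, are not toy multiplications but video decoding, transcoding, and other heavily parallel chores that would otherwise bog down the CPU.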

Despite its setbacks with Larrabee, Intel can’t afford to get out of this game. There’s a major push within the industry to make GPUs major players in computation, not just graphics. Apple is leading a push for a software standard called OpenCL that will help accomplish this and most major players, including Intel, have signed on. The question, now more than ever, is just when Intel will be able to get its own dog into this fight.

“The performance of the initial Larrabee product for throughput computing applications — as demonstrated at SC09 — is extremely promising and we will be adding a throughput computing development platform based on Larrabee, too,” said Knupffer’s email. “While we are disappointed that the product is not yet where we expected, we remain committed to delivering world-class many-core graphics products to our customers. Additional plans for discrete graphics products will be discussed some time in 2010.”