Posts Tagged Capsule Tech

Capsule: Hotel Robot

Oh man.  Oh man, oh man, oh man.  Sorry, this has me just a little bit creeped out, and anyone who has been beta reading Capsule might understand.

A new hotel in New York City has a robot arm, looking quite a bit like one you would see assembling cars, working as an employee.  Its job?  Holding onto the luggage of guests who have either shown up before check-in time or need a place to stow their bags between checking out and actually leaving town.  It costs what I would normally tip for this service, maybe even a little less, and for that it will pull a box off a shelf, remember which one your luggage is in, and retrieve it for you when you want it back.

First they’ll use robots to put our luggage in the right place, then they’ll use them to put us in the right place.



Capsule Tech: PaperPhone and Snaplet

Though the novel itself has been somewhat placed on the back burner, I still look for and enjoy those little bits of real-world technology that bring things just that one step closer to the 2070s I’m building for Capsule.  Today, it’s the PaperPhone, as profiled on Engadget.  It uses a new flexible e-ink technology to provide a display that can be curved, rolled, and even worn on the wrist.  Now, as e-ink it’s got a display much like the Kindle’s, which means gray-scale, a load time between screens, and no ability for animation.  It also looks awkward for receiving calls.  But this isn’t intended to be consumer grade, yet.  It’s a proof of concept, and the concept is pretty damn cool.

Part of the goal is to see just how people would interact with such a device, what gestures feel natural, something to keep in mind whenever developing technology either in real life or in literature. People want technology to be comfortable to use, and they want gestures that make sense while not being overly complicated. Sure, pretending to crumple something up and toss it over your shoulder might seem a logical gesture for throwing something away, but it’s too complicated. Simpler gestures like page turning and sliding are more logical and easier to perform. Just look at any of the videos of people, whether very young or old, using the iPad for the first time and requiring no explanation of the gestures involved. And on the other end…well, there’s Gmail’s April Fools video:

Overwrought gestures in real life would result in lack of adoption and frustrated users. In literature or on screen, they’re just silly. Which is fine if that’s what you’re going for, but otherwise keep the gestures simple and intuitive.


Capsule Tech: Teddy Bear

Of all the stuff I put in Capsule intending it to be creepy, the one thing that has struck the most people as actually creepy is something I intended to be sweet.  Well.  People who found the teddy bear creepy should not click this link, which points out that the next step (the first step was Teddy Ruxpin) has been taken.

And just for some extra content, have some relevant Jonathan Coulton:

(Bonus link to a live version of the same song with Neil Gaiman.)


Capsule Tech: Watson

English is an interesting thing.  It’s probably one of the hardest second languages to pick up, because of its massive number of irregularly conjugated verbs and its bizarre, at times arbitrary, pronunciations.  And that’s if you’re human.  It can be that hard to pick up even if you already understand the general concepts of irony, sarcasm, puns, subtext, or any of the other things that go into communication and seem so natural to us as humans.

So what do you do when trying to learn English as a second language when your first language is machine code?

I watched all three of the Watson episodes of Jeopardy this week, and I’m still processing what I saw.  What was on display was so completely new and different that it felt like watching a step forward in history.  Will these episodes be our generation’s moon landing television event?  No.  Do I think they will have some long-lasting significance?  Yes.  But there was still something oddly familiar about what I was watching.

See, what makes Watson so amazing is that it’s easy to see it as not amazing.  On one hand, it did what three people do five times a week: answer oddly phrased trivia questions that require not just vast knowledge but also the ability to understand how the show uses word games and puns.  On the other hand, it did what we see in science fiction all the time: it seamlessly parsed naturally spoken English and gave the anticipated responses.

But Watson isn’t a human.  And those three episodes were not science fiction.  And it’s oddly necessary to specify both, because what we did see on display was the first step towards a computer acting a little more human, or a little more like the computers Star Trek has promised us.  The kind of computers that run the holodeck, and can create complex simulations from just a few simple spoken statements.

Perhaps those will always be fiction, but this is still an amazing step in that direction.

Was Watson perfect?  No.  No human playing Jeopardy would even consider Toronto as an answer in a category called “US Cities”.  And its knowledge seemed somewhat lacking when it came to the fine arts.  But these feel like minor issues compared to how far the Watson team has come in creating a system capable of understanding human speech well enough, and responding quickly enough, to take down the two best players Jeopardy has ever seen.

What was televised in syndication this week was a marvel, and I hope that anyone with the slightest interest in where technology is going to go over the next twenty years was watching closely.  Because what we saw this week?  Was the future.


Capsule Tech: Word Lens

Has anyone not seen the Word Lens video yet?  Just in case, this is some awesome stuff:

There’s a new group of apps coming out that aren’t so much augmented reality as replaced reality.  The first, which I showed over on Unleaded, was a diminished reality app.  This one is a replaced reality app.  I find both absolutely fascinating in their potential implications, especially as the technology improves.  In both the demo of that DR app and in a more in-depth review of the Word Lens app, there are clear visual errors.  But expecting perfection out of first proof-of-concept apps like this is a fool’s game.

What they both represent, however, are potential steps towards a future where one can’t be as certain about what one sees.  Augmented reality tends to stand out; its elements are clearly not actually there.  These apps, however, look to interrupt reality, change it, then feed it back out in a new form.  Right now, the obvious line in the sand for telling it’s not real is the requirement to hold up a smartphone and see the altered reality only on its screen.

It’ll be interesting to see where this technology goes.  I suspect the ability to put augmented or altered reality into a pair of glasses, or at least goggles, is only a decade or two off.  And at that point, the line will start to blur as to where reality begins and ends.


Capsule Tech: See-Through Displays

Every time a bit of tech pops up that looks like a step towards the future I’m building in Capsule, I like to post it.  Today, there are two:

Via Gizmodo: Full-desk curved displays, though for now it’s just a “clunky” demo.

Via Engadget: Translucent displays.  It’s still a few steps away from Tony Stark’s awesome phone in the most recent Iron Man movie, and it’s certainly several steps away from a portable tablet computer that can be entirely transparent, but there’s plenty of time to get that perfected.

