Building full delegation in OAuth – This time in English

OAuth enables a very simple type of delegation: a user can delegate permissions between two services that they have accounts on. In other words, OAuth lets a user delegate permission to themselves. But full delegation allows arbitrary users of arbitrary services to give permissions to each other. In this article I summarize the two key extensions OAuth needs in order to support full delegation. The first is ‘on behalf of’ (e.g. a service saying “I am making this request on behalf of user X”) and the second is a very simple directory service. The rest of the article uses something like plain English to explain how these features could work in OAuth. Continue reading Building full delegation in OAuth – This time in English
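
To make the ‘on behalf of’ idea concrete, here is a minimal sketch of what such a request could look like on the wire. The header name, token format and URLs below are my illustrative assumptions, not part of OAuth or of any formal proposal.

```python
import requests

# Hypothetical sketch: service A calls service B's API on behalf of user X.
# The Authorization token proves service A's own identity, while the
# made-up X-On-Behalf-Of header names the user being represented.
SERVICE_A_TOKEN = "access_token_for_service_a"  # placeholder value

response = requests.get(
    "https://serviceb.example.com/calendar/freebusy",
    headers={
        "Authorization": f"WRAP access_token={SERVICE_A_TOKEN}",
        "X-On-Behalf-Of": "https://servicea.example.com/users/x",  # illustrative
    },
)
print(response.status_code)
```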

Thoughts on updating finger services

Having a finger service as a directory for finding information about users and services appears to be absolutely necessary if ad-hoc information sharing between people and services is to be possible. But just having a way to finger a person or service is less than half the battle. The real challenge is making it possible for services to update their users’ finger information in an ad-hoc manner. I explore the issues around dynamic finger update in this article. Continue reading Thoughts on updating finger services
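
As a thought experiment, one shape a dynamic finger update could take is an authorized HTTP PUT against the service’s own entry in the user’s finger record. Everything below, including the URL layout, JSON fields and auth scheme, is an assumption for illustration only.

```python
import requests

# Hypothetical sketch: a calendar service updates its own entry in a
# user's finger record. The finger service, URL convention, JSON shape,
# and token scheme are all illustrative assumptions.
FINGER_ENTRY_URL = "https://finger.example.com/users/jane/services/calendar"
SERVICE_TOKEN = "token_proving_this_service_may_edit_its_entry"

entry = {
    "service": "calendar",
    "endpoint": "https://calendar.example.com/jane",
    "updated": "2010-01-15T12:00:00Z",
}

resp = requests.put(
    FINGER_ENTRY_URL,
    json=entry,
    headers={"Authorization": f"WRAP access_token={SERVICE_TOKEN}"},
)
resp.raise_for_status()  # the finger service accepts or rejects the update
```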

Using OAuth WRAP and Finger for ad-hoc user authentication

The OpenID community has worked long and hard to make ad-hoc logins possible on the web. Part of that process has been experiments with a number of different technologies and approaches. Below I make my own proposal for how to handle ad-hoc logins on the Internet using OAuth WRAP and my own spin on Finger. I offer this up as food for thought. Continue reading Using OAuth WRAP and Finger for ad-hoc user authentication
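
For flavor, here is a rough sketch of the discovery half of such a login flow: the relying site fingers the user-supplied identifier to locate the user’s identity provider before running the OAuth WRAP exchange. The URL convention and field name are invented for illustration.

```python
import requests

# Hypothetical sketch of ad-hoc login discovery: take a user-supplied
# identifier, use finger to find the user's identity provider, then hand
# off to an OAuth WRAP exchange. Every URL and field name is assumed.

def discover_identity_provider(identifier: str) -> str:
    user, _, host = identifier.partition("@")
    record = requests.get(f"https://{host}/finger/{user}").json()
    return record["identity_provider"]  # assumed field name

idp = discover_identity_provider("jane@example.com")
# The relying site would now redirect the user's browser to `idp` and
# later receive back a verifiable WRAP token asserting who the user is.
print("Redirect user to:", idp)
```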

Thoughts on building a finger service

Folks of a certain age will remember the finger command/protocol, which allowed one to look up information about a person based just on their login identifier. This command was extremely useful even if it had some troubling security and privacy implications. Efforts are underway to create a Web Finger, but for reasons I’ve previously discussed I think the underlying technologies for those efforts are sub-optimal. So in this article I propose what I think is a much simpler approach. My motivation for caring is that I think having a finger service will make permissioning systems much more useful (see here and here). Continue reading Thoughts on building a finger service
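
As a strawman of what a simpler approach could look like in practice, here is a lookup that is nothing more than an HTTPS GET keyed on the login identifier, returning a small JSON record. The URL convention and fields are my illustrative assumptions, not necessarily the design in the article.

```python
import requests

# Hypothetical sketch of a minimal web finger lookup: a plain HTTPS GET
# keyed on the login identifier, returning a small JSON document.
resp = requests.get("https://example.com/finger/jane")
record = resp.json()
# e.g. {"name": "Jane Doe",
#       "services": {"calendar": "https://calendar.example.com/jane"}}
print(record.get("name"))
```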

The outline of a profile for granting permissions using OAuth WRAP

In a previous article I talked about adding a profile to OAuth WRAP that would enable users to ask for or grant permissions to each other. In this article I show that an OAuth WRAP profile to handle granting permissions only needs two request/response pairs. I then show that an OAuth WRAP profile to handle asking for permissions only needs one additional exchange. Continue reading The outline of a profile for granting permissions using OAuth WRAP
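
Purely for illustration, here is a guess at the general shape one of those request/response pairs could take in an OAuth WRAP style: form-encoded parameters over HTTPS. The endpoint and every wrap_* parameter name below are invented; the actual profile is outlined in the full article.

```python
import requests

# Illustrative sketch only: a grantor's service delivers a permission
# grant to the service holding the resource. All parameter names and the
# endpoint are assumptions, not the profile described in the article.
resp = requests.post(
    "https://files.example.com/wrap/grant",  # assumed grant endpoint
    data={
        "wrap_grantor": "https://servicea.example.com/users/alice",
        "wrap_grantee": "https://serviceb.example.com/users/bob",
        "wrap_scope": "read https://files.example.com/alice/report.doc",
    },
)
print(resp.status_code)  # the response acknowledges or rejects the grant
```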

Open permissions matter for an open web

The key to an open social web is permissions. There is data we don’t want to share and data we do want to share; permissions let us create the appropriate barriers. Closed networks like Facebook have reasonably rich permission infrastructures, but what about open networks? How should Google and Microsoft enable document sharing across Google Docs and SharePoint Online? Sure, WebDAV can handle the actual mechanics of listing out documents, editing, etc. But how do the permissions get put into place in an open manner directly between users of the two services? This is a hole in the standards infrastructure and it’s time to fill it. Continue reading Open permissions matter for an open web

The CAP theorem and modern data centers – for now, choose consistency!

The dominance of the commodity machine model for data centers is so complete that one forgets there was ever any other viable choice. But IBM, for one, is still selling lots of mainframes. Nevertheless, the world I live in is built on top of data centers that contain a lot of commodity-class machines. These machines have a nasty habit of failing on a fairly regular basis. So when I think about the CAP theorem I think about it in the context of a data center filled with a bunch of not completely reliable boxes.

In that case partition tolerance (which, as I explain below, ends up meaning tolerance of machine failure) is a requirement. So in designing frameworks for the data centers I work with, the CAP theorem leaves me with exactly two choices: do I want consistency or availability?

My belief is that, at least for the immediate future, the vast majority of developers need to choose consistency.
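
To illustrate what choosing consistency means operationally, here is a toy sketch of a quorum write: it refuses to succeed (sacrificing availability) whenever a majority of replicas cannot acknowledge, rather than letting replicas diverge. The replica interface is hypothetical.

```python
# Toy sketch of "choosing consistency": a write succeeds only if a
# majority of replicas acknowledge it synchronously; when too many
# machines are down or partitioned away, the write fails instead of
# risking stale or conflicting data later.

class Replica:
    def __init__(self, name: str, alive: bool = True):
        self.name, self.alive, self.store = name, alive, {}

    def write(self, key: str, value: str) -> bool:
        if not self.alive:          # simulates a failed/partitioned machine
            return False
        self.store[key] = value
        return True

def consistent_write(replicas: list, key: str, value: str) -> None:
    acks = sum(r.write(key, value) for r in replicas)
    if acks <= len(replicas) // 2:  # no majority: refuse, don't diverge
        raise RuntimeError("write unavailable: quorum not reached")

replicas = [Replica("a"), Replica("b", alive=False), Replica("c")]
consistent_write(replicas, "user:42", "profile-v2")  # succeeds: 2 of 3 acks
```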

Continue reading The CAP theorem and modern data centers – for now, choose consistency!

A Mac fail? Please help me with remote desktop

I want to get a Mac laptop for my wife, but I want to be able to use it as a remote terminal for my iMac upstairs.

There doesn’t appear to be a decent solution for this problem on the Mac. VNC is a joke: it will just take my 24-inch iMac screen and shrink it down to the laptop’s screen size. And yes, I have played around with smart zoom, but it’s really painful.

Isn’t there a Mac equivalent of Microsoft’s outstanding Remote Desktop Connection application and RDP protocol?

For what it’s worth, I signed up to be notified when AquaConnect releases their Mac remote desktop product, which is based on RDP, but they aren’t even announcing dates.

Any ideas or am I just out of luck?

Recovering from self-inflicted data corruption – a summary

Of late I have been torturing myself over this question: even if I build on top of a highly reliable storage service like Windows Azure Table Service, do I still need to worry about backups, versioning, journals and such? The answer would seem to be yes, I do. Mostly because even if the table store works perfectly, I’m still going to have bugs I introduced that are going to hork my data.

In fact what I specifically need to do is:

  1. Lobby the Windows Azure Table Storage team to add undelete for tables so if I accidentally blow away one of my tables I have some hope (oh, and ACLs would be nice too)
  2. Be very careful about how I update my schemas
  3. Implement a command journal (and be clear about its limitations; see the sketch after this list)
  4. If time permits, implement tombstoning
  5. If I’m feeling really wacko, implement my own versioning system on top of the table store (or just backups if I’m feeling only slightly wacko)
  6. Put into place a realistic plan to take advantage of all these features while keeping in mind the limitations of these techniques.
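
As promised in item 3, here is a minimal sketch of a command journal: record every mutating command in an append-only log before executing it against the table store. The file-based backend and record shape are illustrative assumptions.

```python
import json
import time
import uuid

# Minimal command journal sketch: before executing any mutating command
# against the table store, append a self-describing record to an
# append-only log so state can be reconstructed or audited later.
JOURNAL_PATH = "command_journal.log"

def journal(command: str, payload: dict) -> str:
    entry = {
        "id": str(uuid.uuid4()),
        "ts": time.time(),
        "command": command,   # e.g. "InsertEntity", "DeleteTable"
        "payload": payload,
    }
    with open(JOURNAL_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")  # one record per line, append-only
    return entry["id"]

# Journal first, then run the real operation against the table store.
entry_id = journal("InsertEntity", {"table": "Users", "RowKey": "42"})
```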

The links in the previous text are to the other articles in this series that I wrote for my blog. Those articles are:

Implementing Versioning in Windows Azure Table Store

In a previous article I argued that I needed some kind of journaling/backup for my Windows Azure Tables in order to handle my own screw-ups. In this article I re-examine the value of versioning for recovering from self-inflicted data corruption, discuss backups as a possible substitute for versioning, look at what versioning might look like if added as a native feature of Windows Azure Table Store, and finish up by proposing a design that would let me implement versioning on top of Windows Azure Table Store.
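
As a taste of what such a design could look like, here is a sketch of one common pattern: write every version as its own row and embed an inverted timestamp in the RowKey so the newest version sorts first. The key layout and helper are my own assumptions, not necessarily the design proposed in the article.

```python
# Illustrative versioning-on-top-of-the-table-store sketch: never update
# an entity in place; write each version as a new row whose RowKey embeds
# an inverted timestamp so the newest version sorts first in a range query.
MAX_TICKS = 10**17  # arbitrary ceiling for the inverted-timestamp trick

def version_row_key(entity_id: str, ticks: int) -> str:
    inverted = MAX_TICKS - ticks           # newest version gets smallest key
    return f"{entity_id}_{inverted:017d}"

# Two versions of the same logical entity:
older = version_row_key("doc42", ticks=1_000)
newer = version_row_key("doc42", ticks=2_000)
assert newer < older  # lexicographic order: newest version comes first
```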

This article is part of a series. Click here to see a summary and the complete list of articles in the series.

Continue reading Implementing Versioning in Windows Azure Table Store