Thoughts on building a finger service

Folks of a certain age will remember the finger command/protocol, which allowed one to look up information about a person based on nothing more than their login identifier. The command was extremely useful even if it had some troubling security and privacy implications. Efforts are underway to create a Web Finger, but for reasons I’ve previously discussed I think the underlying technologies for those efforts are sub-optimal. So in this article I propose what I think is a much simpler approach. My motivation for caring is that I think having a finger service will make permissioning systems much more useful (see here and here). Continue reading Thoughts on building a finger service
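
For anyone who never used it, the original protocol (RFC 1288) was about as simple as protocols get: open a TCP connection to port 79, send the login name followed by CRLF, and read back whatever text the server returns. A minimal client sketch in Python (the host name is made up):

```python
import socket

def finger(user: str, host: str, port: int = 79) -> str:
    """Minimal finger client: send 'user<CRLF>', read until the server closes."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(user.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("ascii", errors="replace")

# e.g. print(finger("alice", "example.edu"))
```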

The outline of a profile for granting permissions using OAuth WRAP

In a previous article I talked about adding a profile to OAuth WRAP that would enable users to ask for or grant permissions to each other. In this article I show that an OAuth WRAP profile to handle granting permissions only needs two request/response pairs. I then show that an OAuth WRAP profile to handle asking for permissions only needs one additional exchange. Continue reading The outline of a profile for granting permissions using OAuth WRAP
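
I can’t reproduce the profile’s exact messages here (see the full article), but to make “two request/response pairs” concrete, here is a hypothetical sketch of what the granting exchanges might look like using WRAP’s form-encoded POST style. All parameter names and URLs below are illustrative, not from the WRAP spec or the article:

```python
import urllib.parse
import urllib.request

def post_form(url: str, fields: dict) -> dict:
    """POST form-encoded fields and parse a form-encoded response (WRAP style)."""
    body = urllib.parse.urlencode(fields).encode("ascii")
    with urllib.request.urlopen(url, data=body) as resp:
        return dict(urllib.parse.parse_qsl(resp.read().decode("ascii")))

# Exchange 1: Alice's service records the grant and returns an opaque grant id.
grant = post_form("https://alice.example.com/wrap/grant", {
    "scope": "read https://alice.example.com/docs/report",
    "grantee": "bob@example.org",
})

# Exchange 2: Bob's service redeems that grant for a WRAP access token.
token = post_form("https://alice.example.com/wrap/token", {
    "grant_id": grant["grant_id"],
})
```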

Open permissions matter for an open web

The key to an open social web is permissions. There is data we don’t want to share and data we do want to share; permissions let us create the appropriate barriers. Closed networks like Facebook have reasonably rich permission infrastructures, but what about open networks? How should Google and Microsoft enable document sharing across Google Docs and SharePoint Online? Sure, WebDAV can handle the actual mechanics of listing documents, editing, etc. But how do the permissions get put into place in an open manner directly between users of the two services? This is a hole in the standards infrastructure, and it’s time to fill it. Continue reading Open permissions matter for an open web

The CAP theorem and modern data centers – for now, choose consistency!

The dominance of the commodity machine model for data centers is so complete that one forgets there was ever any other viable choice. But IBM, for one, is still selling lots of mainframes. Nevertheless, the world I live in is built on top of data centers that contain a lot of commodity-class machines. These machines have a nasty habit of failing on a fairly regular basis. So when I think about the CAP theorem, I think about it in the context of a data center filled with a bunch of not completely reliable boxes.

In that case partition tolerance (which, as I explain below, ends up meaning tolerance of machine failure) is a requirement. So when designing frameworks for the data centers I work with, the CAP theorem leaves me with exactly two choices: do I want consistency or availability?

My belief is that the vast majority of developers, at least for the immediate future, need to choose consistency.
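
To make the choice concrete, here is a toy sketch (my own, not from the article) of what “choosing consistency” means in code: a write is rejected outright when a majority of replicas can’t acknowledge it, sacrificing availability rather than letting replicas diverge during a partition.

```python
class QuorumStore:
    """Toy CP store: writes need a majority of replicas or they fail loudly."""

    def __init__(self, replicas):
        self.replicas = replicas  # objects with a .put(key, value) method

    def put(self, key, value):
        acks = 0
        for replica in self.replicas:
            try:
                replica.put(key, value)
                acks += 1
            except ConnectionError:
                pass  # replica is down or partitioned away
        if acks <= len(self.replicas) // 2:
            # The CP choice: refuse the write (lose availability) rather than
            # accept it on a minority and let the replicas drift apart.
            raise RuntimeError("no quorum; write rejected to preserve consistency")
```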

Continue reading The CAP theorem and modern data centers – for now, choose consistency!

A Mac fail? Please help me with remote desktop

I want to get a Mac laptop for my wife, but I want to be able to use it as a remote terminal for my iMac upstairs.

There doesn’t appear to be a decent solution for this problem on the Mac. VNC is a joke. It will just take my 24-inch iMac screen and shrink it down to the laptop’s screen size. And yes, I have played around with smart zoom, but it’s really painful.

Isn’t there a Mac equivalent to Microsoft’s outstanding Remote Desktop Connection application and RDP protocol?

For what it’s worth, I signed up to be notified when AquaConnect releases their RDP-based Mac remote desktop product, but they aren’t even announcing dates.

Any ideas or am I just out of luck?

Recovering from self-inflicted data corruption – a summary

Of late I have been torturing myself over a question: even if I build on top of a highly reliable storage service like Windows Azure Table Service, do I still need to worry about backups, versioning, journals and such? The answer would seem to be yes, I do, mostly because even if the table store works perfectly, I’m still going to have bugs of my own making that hork my data.

In fact what I specifically need to do is:

  1. Lobby the Windows Azure Table Storage team to add undelete for tables so that if I accidentally blow away one of my tables I have some hope (oh, and ACLs would be nice too)
  2. Be very careful about how I update my schemas (see the sketch after this list)
  3. Implement a command journal (and be clear about its limitations)
  4. If time permits, implement tombstoning
  5. If I’m feeling really wacko, implement my own versioning system on top of the table store (or just backups if I’m feeling only slightly wacko)
  6. Put into place a realistic plan to take advantage of all these features while keeping their limitations in mind.
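
On point 2, one defensive pattern (my sketch, not from the articles) is to tag every row with a schema version and upgrade old rows lazily on read, rather than rewriting the whole table in place, where one buggy migration pass can corrupt everything at once. All names below are illustrative:

```python
CURRENT_SCHEMA = 2

def upgrade_row(row: dict) -> dict:
    """Lazily migrate a row to the current schema when it is read."""
    version = row.get("SchemaVersion", 1)
    if version == 1:
        # Hypothetical migration: v1 stored a single "Name" column; v2 splits it.
        first, _, last = row.pop("Name", "").partition(" ")
        row["FirstName"], row["LastName"] = first, last
        row["SchemaVersion"] = 2
    return row
```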

The links in the text above point to the other articles in this series that I wrote for my blog. Those articles are:

Implementing Versioning in Windows Azure Table Store

In a previous article I argued that I needed some kind of journaling/backup for my Windows Azure Tables in order to handle my own screw-ups. In this article I re-examine the value of versioning for recovering from self-inflicted data corruption, discuss backups as a possible substitute for versioning, look at what versioning might look like if added as a native feature of Windows Azure Table Store, and finish up by proposing a design that would let me implement versioning on top of Windows Azure Table Store.
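
As a taste of what such a design could look like (this is my sketch; the article’s actual proposal may differ), one common trick is to never update rows in place and instead write every version as its own row, with an inverted timestamp baked into the RowKey so the newest version sorts first within the partition:

```python
import time

TABLE = {}  # (PartitionKey, RowKey) -> entity; a stand-in for the real table

def versioned_row_key(logical_key: str) -> str:
    # Inverted timestamp: newer versions get lexically smaller RowKeys,
    # so a prefix query returns the latest version first.
    inverted = (2**63 - 1) - time.time_ns()
    return f"{logical_key}_{inverted:019d}"

def save_version(partition: str, logical_key: str, entity: dict) -> None:
    TABLE[(partition, versioned_row_key(logical_key))] = dict(entity)

def latest(partition: str, logical_key: str) -> dict:
    # In the real store this would be a range query on the RowKey prefix.
    keys = sorted(k for k in TABLE
                  if k[0] == partition and k[1].startswith(logical_key + "_"))
    return TABLE[keys[0]]
```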

This article is part of a series. Click here to see summary and complete list of articles in the series.

Continue reading Implementing Versioning in Windows Azure Table Store

The limits of recovering from application logic failures

I have been blathering on all week about how to prepare for application logic failures in services and how to potentially recover from the damage those errors cause. I have yammered on about command journals (twice), tombstones, versioning, etc. But none of these techniques is magical. They all have very serious limits, which mean that in most non-trivial cases the best one can really do is say to the user "Here is the command I screwed up, here are the specific mistakes made, here is what the values should have been, do you want to repair this damage?" Below I explore three specific examples of those limits, which I call read syndrome, put syndrome, and the e-tag effect.
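
I won’t spoil the article’s definition of the e-tag effect, but for context: Windows Azure Table Store uses e-tags for optimistic concurrency, meaning an update carries the e-tag you read and the store rejects the write if the row has changed underneath you. A toy in-memory model of that rule (the class and names are mine):

```python
class PreconditionFailed(Exception):
    """Raised when the supplied e-tag no longer matches the stored row."""

class Table:
    def __init__(self):
        self.rows = {}  # key -> (etag, entity)

    def insert(self, key, entity):
        self.rows[key] = (1, dict(entity))

    def update(self, key, entity, if_match):
        etag, _ = self.rows[key]
        if if_match != etag:
            # Someone else updated the row since we read it.
            raise PreconditionFailed(key)
        self.rows[key] = (etag + 1, dict(entity))
```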

This article is part of a series. Click here to see summary and complete list of articles in the series.

Continue reading The limits of recovering from application logic failures

Tombstoning on top of Windows Azure Table Store

After command journaling, probably the next most effective protection against application logic errors is tombstoning (keeping a copy of the last version of a deleted row). In this article I propose a design for adding tombstoning to Windows Azure Table Store using two tables: a main table and a tombstone table.
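
Here is a minimal sketch of how the two-table scheme could work, with in-memory dicts standing in for the two Azure tables (the DeletedAtNs field name is mine):

```python
import time

main_table = {}       # (partition, row) -> entity
tombstone_table = {}  # (partition, row) -> entity plus deletion metadata

def delete_with_tombstone(partition: str, row: str) -> None:
    """Copy the row into the tombstone table before removing it."""
    entity = main_table.pop((partition, row))
    tombstone = dict(entity)
    tombstone["DeletedAtNs"] = time.time_ns()
    tombstone_table[(partition, row)] = tombstone

def undelete(partition: str, row: str) -> None:
    """Restore an accidentally deleted row from its tombstone."""
    entity = tombstone_table.pop((partition, row))
    entity.pop("DeletedAtNs", None)
    main_table[(partition, row)] = entity
```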

This article is part of a series. Click here to see summary and complete list of articles in the series.

Continue reading Tombstoning on top of Windows Azure Table Store

Thoughts on implementing a command journal

I had previously concluded that command journaling (keeping a journal of all the external user commands and internal maintenance commands I issue) is really useful for recovering from self-inflicted data corruption. In this article I look into the various techniques I can use to implement a command journal, trading off system performance against the journal’s utility in recovery.
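
As a taste of the simplest option (my sketch, not necessarily the article’s design), append one JSON record per command to an append-only log before executing the command; even the flush policy is already a performance-versus-utility trade-off:

```python
import json
import time

def journal_command(log, actor: str, command: str, args: dict) -> None:
    """Append one record per command to an append-only journal file."""
    record = {
        "at_ns": time.time_ns(),
        "actor": actor,       # who issued it, e.g. "user:alice" or "system:gc"
        "command": command,   # e.g. "DeleteRow"
        "args": args,
    }
    log.write(json.dumps(record) + "\n")
    log.flush()  # cheap; fsync would trade more performance for more safety

# usage:
#   with open("commands.journal", "a") as log:
#       journal_command(log, "user:alice", "DeleteRow", {"key": "42"})
```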

This article is part of a series. Click here to see summary and complete list of articles in the series.

Continue reading Thoughts on implementing a command journal