Jonathan's Pancheria

dotcom Thousandaire

Click here for all of my del.icio.us bookmarks.

Published on 12/12/2005 at 03:43AM.

0 comments


Earlier, I posted about how it is easier to implement both web services and web service clients when access is not forced through objects serialized into and back out of XML, but instead goes through XML documents designed to stand on their own.

Elliotte Rusty Harold sets out a nice, short example of the nature of the problem, and summarizes it well in this quote:

“don’t “help” users out by changing XML into something else, especially not objects. Don’t assume you know what they’re going to want to do with the data. Give them the XML and let them use the classes and objects that fit their needs, not the ones that fit your needs. XML is exchangeable. Objects are not.”

Again, for the record: if I work with your web service, you do not know what data, data structures, or code artifacts I will have in place to access it. Please don’t try to guess by forcing me through your view of how the software artifacts should look. Let me figure out how to construct the raw message, and how to interpret what you send back. SOAP toolkits that tightly map XML to particular language-specific data structures, and back, make this hard.
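To make that concrete, here is a minimal Ruby sketch of constructing and sending the raw message myself; the host, path, SOAPAction, and message body are all invented for illustration:

require 'net/http'

# The request is just an XML document I build myself.
# Everything below (host, path, action, body) is a made-up example.
envelope = <<XML
<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://example.com/stocks">
      <symbol>MSFT</symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>
XML

http = Net::HTTP.new('example.com', 80)
response = http.post('/soap/endpoint', envelope,
                     'Content-Type' => 'text/xml',
                     'SOAPAction'   => '"http://example.com/stocks/GetQuote"')

# The reply is just another XML document, and I decide how to interpret it.
puts response.body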

Published on 09/12/2005 at 07:47PM.

0 comments

Found this quote today from Eve Maler:
bq. The trend in distributed computing is towards service-oriented architectures (SOAs). Early in the life of this buzzword, some people said it should really be called a document-oriented architecture (except for the unpleasant acronym :-) because it’s really all about document-passing rather than a tightly coupled RPC paradigm, that is, if you want to be successful at reusing the components.

The quote is part of a longer discussion that is partly tangential, but I have spent the last four years integrating services that are, or could be, called “web services,” and they have taken various approaches to moving the XML data. Some expect you to use a SOAP toolkit that hides the XML behind objects that get serialized and de-serialized. Others came out of a more EDI-like world and are oriented toward passing messages or data structures.

The ones that are easier to deal with are the ones that move messages or data structures rather than insisting that I hide behind the de-serialized objects. In general the objects end up not serving my purposes well, and require long recompile-and-test cycles to deal with. We end up with an impedance mismatch between the software artifacts of the web service we consume, expressed as objects that I must use but did not define, and my own software artifacts. To fix this, I either need to write wedges that sit between the web service I consume and my software artifacts and do the mapping, or I need to build my software artifacts so that they have a “has-a” relationship with the web service’s artifacts. That just splits the wedge into per-object pieces, where each of my objects that “has-a” must manage its own piece of the map. The really intractable problem is when the object mapping that one toolkit makes is incompatible with another’s.

On the other hand, when someone changes an XML document that I can treat as an XML document, I can work in a script-based (not compiled) environment and update the mapping between the document and my software artifact the way I need to, and ship faster. There is variance between what you send me and what I expect, but since my entire system is built solely as a mapper between my artifacts and the web service’s documents, there is no impedance mismatch, just a version change to my mapper.
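As a sketch of what that mapper amounts to (Ruby with REXML; the document and field names are invented):

require 'rexml/document'

# A made-up response document from some web service.
xml = '<order id="42"><customer>Acme</customer><total currency="USD">19.95</total></order>'

doc = REXML::Document.new(xml)

# Pull out only the fields my artifact cares about; anything else in the
# document is simply ignored, so most version changes on the other side
# are just a small edit here.
order = {
  :id       => doc.root.attributes['id'].to_i,
  :customer => REXML::XPath.first(doc, '/order/customer').text,
  :total    => REXML::XPath.first(doc, '/order/total').text.to_f
}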

It’s faster to update, easier to maintain, and I don’t have to care about whether your web service’s underlying object model is any good or not. I have one system, the mapper between documents and my software artifacts, not two systems: your objects plus my wrapping layer to handle the impedance mismatch. And I cannot end up in situations where my toolkit and yours cannot generate compatible object↔message bindings.

At the end of the day, I think you have to care about what’s in the angle brackets. Developers who just want to deal with objects see a false economy: you feel like you are working higher up the protocol stack because you never observe the wire format, just your objects. Eventually, however, you end up working below the wire format, because the underlying plumbing of the service you are talking to is exposed in the software artifacts (objects) you have to manipulate on your side. You either have to know details about how the objects on the other side were built, or you have to muck with the document format anyway to map away details from the other side that you don’t care about or cannot handle; usually both.

Published on 03/11/2005 at 07:08PM.

0 comments

To list the schemas in a PostgreSQL database:

select nspname from pg_namespace;
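If you want to leave out PostgreSQL’s built-in schemas, a filtered variant (optional):

select nspname from pg_namespace
where nspname !~ '^pg_' and nspname <> 'information_schema';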

Published on 01/11/2005 at 05:20AM.

0 comments

I wanted a convenient way to have multiple Rails apps share my single PostgreSQL database while keeping each app’s tables/procs/views/other artifacts isolated from the others. I came up with the following solution, which I can’t claim is original, but I did not find documentation that laid it out all in one place. The Typo blog you are reading is currently running with this setup.

The trick is to use multiple PostgreSQL schemas (part of PostgreSQL since 7.4) to separate each Rails app’s tables etc. into their own schema space. Then, every time you want to load up a new Rails app, or really any PostgreSQL-based application, you just create a new schema. In theory you could even put the development, test, and production databases for a single Rails app into separate schemas in one database.

The PostgreSQL documentation on schemas explains what the recipe below is doing with the schema commands. Rails has supported PostgreSQL schemas since version 0.11.0.

h3. Steps to create a schema in a PostgreSQL database and configure a Rails app to use it

1. Create the new schema in your database:
bq. CREATE SCHEMA myrailsapp\g

Replace myrailsapp with whatever name you want for your schema in the SQL above and everywhere else following.

2. If you are installing an existing Rails app that ships with a SQL script to generate its tables etc., insert the following as the first line of that script; otherwise skip to step 4:
bq. SET search_path TO myrailsapp;

PostgreSQL uses the first schema in the search path as the default location for tables/procs/triggers/etc. when a name is unqualified (see the discussion of schemas in the PostgreSQL docs for more details).
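For example (the table name is invented for illustration):

SET search_path TO myrailsapp;
CREATE TABLE posts (id serial PRIMARY KEY);  -- created as myrailsapp.posts
SELECT * FROM posts;                         -- resolves to myrailsapp.posts
SELECT * FROM myrailsapp.posts;              -- explicit qualification still works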

3. Execute the SQL script with the change from step 2. If you go into psql and issue the following commands, you should see the objects that your script created:


SET search_path TO myrailsapp\g

\d

4. Edit your database.yml file and add the following to the appropriate section(s):
bq. schema_search_path: myrailsapp

So your database.yml section for production would look like this if you wanted to use the myrailsapp schema in the mydbname database:


production:
  adapter: postgresql
  database: mydbname
  # whatever other PostgreSQL config options you require
  schema_search_path: myrailsapp

That schema_search_path: line in database.yml tells Rails to set the schema search path so that unqualified database object names are looked up only in the myrailsapp schema. That means that, without changing anything about how you write your code, your Rails app now keeps all its artifacts in a schema inside your database.
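As a quick sanity check from script/console (assuming the app boots in the environment you configured):

>> ActiveRecord::Base.connection.select_value("SHOW search_path")
=> "myrailsapp"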

Enjoy!

Jonathan

Published on 01/11/2005 at 04:27AM.

0 comments

NACK the GAC

Chris Sells goes over the very few reasons why the GAC should be employed. We’ve run into most of the issues here at Outtask:

bq. The one where I can only come up with two reasons for using the GAC, the first being very difficult to pull off correctly and the second to happen more and more rarely as we move to SOA and .NET. […] This post feels very much like “Why do we still need duals?” so if you’ve got a reason for using the GAC that I didn’t list, by all means, let me know!

[Marquee de Sells: Chris’s insight outlet]

This smells to me like more MS technology that’s been hyped as the greatest thing since sliced bread but really only solves a small set of problems. Sort of like MTS/COM+. Other than its support for really good distributed transaction processing with no coding, MTS/COM+ buys you nothing except a toolkit for solving a bunch of thorny COM thread-isolation and reliability problems that are really Microsoft design problems you shouldn’t have had to care about in the first place.

Having said that, the built-in distributed transaction handling really is nice. You can get quite far with it before you have to write any custom distributed-transaction code. But that is only a fraction of what MS promised in 1998–1999 that COM+ would do for you.

Published on 11/03/2004 at 08:22PM.

0 comments
