Monday, December 10, 2007

JBoss Archive Browsing

Today I learned that jboss-archive-browsing.jar is in fact a "slice" of the much bigger jboss-common.jar
and that, if I want to download the code for that project, I have to go here.
(It is important to remember that the code in jboss-archive-browsing has an ugly bug that makes it impossible to
use Hibernate if your entities are inside a .jar.)

Sunday, December 02, 2007

Complication & Complexity: Not the same thing

Today I learned that Complication & Complexity are not the same thing:

  • Complexity: A problem is complex when you do not know how to solve it; the solution might be simple, or complicated.
  • Complicated: A problem is complicated when you know the exact algorithm that solves it, but the algorithm has lots of steps and/or you need to follow a lot of well-known rules.

Self learning: Knowledge is power

Here I am reading about Decision Making, Software Engineering and Network Security. That is what I love to do... reading... learning... but lately I have not felt inspired... I felt like... like there was nothing worth learning anymore... I guess I was just overwhelmed with work and the stress of life, because I am regaining my will to learn more... always learn more...

I think that maybe what re-started my desire for knowledge was a conversation I had today with my girlfriend... we were talking about how it was possible that most people didn't do problem analysis before trying to solve problems in their work processes... I think it is because they don't know anything about the real meaning of Six Sigma or how to measure quality in a process. We think it is logical to start by analyzing the current situation because we learned that in our university courses on software engineering, but... why should anyone else know about it? Most people are pressured at work to "just do something" to solve the problem, without stopping first to determine the root causes of the problem they need to solve, so they end up treating the symptoms instead of the disease, and transform a problem that could have been solved with a single action into a chronic problem that needs to be handled again, and again, and again.

The sad part is that a lot of the time the solution to the problem has been known by some people for quite a long time, but the person assigned to solve the problem ASAP in a particular place doesn't know that the problem he or she is dealing with was solved long ago, and ends up spending an excessive amount of time and energy to discover a partial solution to a problem that would have been easy to solve with the right information.

My favorite example of this is source code versioning systems... the first one (the Source Code Control System, SCCS) was invented in 1972, but today, 35 years later, I still arrive at software departments or even software enterprises that do not use any software for version control... how is that possible when really good versioning systems like Subversion are open source and free, and have excellent integration with Windows (TortoiseSVN)? I believe it is because of "lack of knowledge": people just don't know that there is free and easy software to deal with versioning, and they end up designing complex procedures to manually handle the versioning of files... they do it that way because they plainly do not know there are better ways...

Monday, October 22, 2007

Law of Conservation of Complexity

In short, the law of conservation of software complexity states that complexity cannot be created or destroyed, it can only be moved from one place (or form) to another, such as when:
  • File based persistence is replaced with SQL based persistence.
  • An object relational mapper is used to translate objects into tuples.
  • A remoting framework is used instead of the plain TCP/UDP APIs.
  • Software is programmed in a higher level language.
  • An object oriented framework is used to build a web app instead of a plain C CGI.
  • An operating system provides a GUI instead of just a command line.
In all these cases, it seems as if complexity were reduced but, in fact, it was only moved to a place where it cannot be seen. That, in my opinion, is the reason that motivated the creation of encapsulation in object oriented languages: to make it easy to move complexity around, and hide it from some developers to make their work easier. But, in the end, the complexity is still there, and sooner or later you will need the help of the creators of the framework/database/GUI... or, if it is open source and you have the time and energy, you might have the courage to go and fight directly with that hidden complexity... but the final fact is that complexity is never destroyed, it is just moved around.

Maybe that is why ObjectWeenies and RelationalWeenies and FunctionalWeenies, and all the other Weenies, just can't understand each other... they all have different strategies to deal with complexity, and when one of them thinks that a particular place is the ideal place to hide complexity, it turns out that is precisely the place where another one thinks complexity should not be dealt with...

Saturday, October 20, 2007

It was programmed with...

Lately I have been seeing people at work saying "I built that system using PHP" or "We should build all our applications with Java" or "All our applications should be built with Ajax" or "We should (or should not) use Java/J2EE", but in the majority of cases it turns out that the final product is not built with a single technology but with a combination of several... and the problems with that show up when we start to integrate applications:
  • Integrating these applications will be easy, they both use web services (yes, but one of them uses JSON, another SOAP, another REST and another Hessian).
  • Let's combine these two web applications into a single one (yes, but one of them is built using Spring+Hibernate and the other was built with JDBC plus a home-made wannabe framework).
  • The architecture of these applications is very similar, they are both OLTP applications, so integrating their code bases (or exchanging developers between them) will be easy... and it turns out one of them is built using stored procedures in PL/SQL, another uses TopLink and the last one uses iBatis.
  • These two applications are AJAX based, so it will be easy to integrate them (or to exchange developers between them)... oops, they use two completely different and perhaps even incompatible AJAX frameworks.

So... are we really saying something that somehow resembles the truth when we say "I built that application with XXXX"? I think not... but then... why do we keep saying stuff like "that was built in Java" if there are 1000 different ways to build it with Java... 1000 ways to build it with AJAX, 1000 ways to build it with PHP, 1000 ways to build it in .NET... and millions of ways to build it if we start combining these "base" technologies?

Saturday, October 06, 2007

Ruby is younger therefore better than Java?

I wrote this as a response to From Java to Ruby: Programmer's view, but I couldn't post it because of a bug in that site, so I decided to post it in my own blog:

Isn't this a simplified view of the advantages of Ruby over Java? For example, the lack of choice in Ruby means that if the "one Ruby way" to do things is not good for your project... you will have to go to other technologies (Java, for example). And that can happen pretty often:

  • Hibernate has many more options for integration with legacy databases than Ruby's Active Record... almost all databases now have JDBC drivers (you cannot say the same about Ruby database support).
  • Spring offers integrated transaction handling that makes it possible to switch from JDBC transactions to JTA transactions without changing a single line of Java code (you just need to modify around 5 lines in an XML file). What is the equivalent of that in Ruby?
  • Calling stored procedures is not that easy with Ruby... what is the equivalent of HQL (JPAQL) for Ruby... can you honestly say that it can handle all the special cases HQL can... and with the same efficiency?
  • With Java I can build a web application GWT style, JSF style or plain JSP style... and each style has advantages and disadvantages... do I have all that power with Rails? (Of course, those frameworks could be re-built in Ruby, but the question is, do I have them now?)
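
The Spring bullet above refers to swapping the transaction manager bean in configuration while the Java code stays untouched. A hedged sketch of what those XML lines typically looked like in Spring 2.x (the bean id, the dataSource reference and the surrounding setup are assumptions from common configurations, not from the original post):

```xml
<!-- Plain JDBC transactions: -->
<bean id="transactionManager"
      class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
    <property name="dataSource" ref="dataSource"/>
</bean>

<!-- Switching to JTA means replacing that bean definition with something like:

<bean id="transactionManager"
      class="org.springframework.transaction.jta.JtaTransactionManager"/>

The services that are demarcated with declarative transactions keep working
against the same "transactionManager" bean name, so no Java code changes. -->
```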

I think you are right when you say that it's healthy to start with a clean slate and rebuild on a cleaner, simpler foundation, but before saying that the new foundation is actually better than the old one, you have to be sure that the new foundation is capable of handling all the special cases the old foundation could handle... or remember that if you remove all the abilities the old foundation has to handle special cases, you might end up realizing that your new foundation is just a replica of the state the old foundation was in when it was younger. (And even then the old foundation has the advantage that you can use it as it was used in the past, but you cannot use a new foundation as it will be used in the future.)

Friday, August 24, 2007

Code depreciation

Depreciation is a term used in accounting, economics and finance with reference to the fact that assets with finite lives lose value over time (from Wikipedia). In this case, I'd like to discuss code depreciation and its relationship with optimization and maintainability by proposing the following rule:

If code is written for performance (compromising maintainability), the value of that performance optimization (and of the maintainability degradation) will depreciate, because the likelihood of having either faster hardware or a different developer maintaining the code increases as time passes.

On the other hand, maintainable code increases its value for similar reasons:

If code is written for maintainability (postponing performance enhancements), the value of that maintainability (and of the performance degradation) will increase, because the likelihood of having either faster hardware or a different developer maintaining the code increases as time passes.

I added these definitions to C2... I wonder how (or if) they will evolve.

Wednesday, August 22, 2007

Data Transfer Object Injection

Data Transfer Object injection is a programming error which results in security holes. It is to remote object service based applications that use object graphs what SQL injection is to web-based applications that use databases.

DTO injection can happen wherever there is a remote object service that allows a client system to send an object graph that is automatically converted by an object relational mapper into SQL statements.

Instead of sending a valid object graph, the attacker can send a different object graph, representing alterations to the database that go well beyond his security level. For example, a remote object service receives an object graph containing objects that represent new users, or new permissions granted to existing users of the system.

To prevent this problem, it should be possible to specify, at the object relational mapping level, which entities can be saved by the current user... many object relational mappers, or XML relational mappers, automatically write the changes represented by the object graph to the database, without caring whether the current application user has the privileges required to persist those objects... we cannot rely on RDBMS security, because most remote object services use the same database user for all the calls... and I think that connecting with a different user for each remote object service call would be bad for connection pooling (decreasing performance).
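
The check described above can be sketched without any ORM at all: before handing a client-submitted object graph to the persistence layer, verify every object in it against the entity types the current user's role is actually allowed to save. This is a toy illustration of the idea; all class and method names here are invented:

```java
import java.util.*;

// Toy guard against DTO injection: a role may only persist whitelisted
// entity types, and anything else in the submitted graph is rejected
// before the ORM ever sees it.
public class DtoInjectionGuard {

    // role -> entity classes that role may save
    private final Map<String, Set<Class<?>>> allowed = new HashMap<>();

    public void allow(String role, Class<?> entityType) {
        allowed.computeIfAbsent(role, r -> new HashSet<>()).add(entityType);
    }

    /** Returns the objects in the graph the role is NOT allowed to persist. */
    public List<Object> rejected(String role, Collection<?> objectGraph) {
        Set<Class<?>> ok = allowed.getOrDefault(role, Collections.emptySet());
        List<Object> bad = new ArrayList<>();
        for (Object o : objectGraph) {
            if (!ok.contains(o.getClass())) {
                bad.add(o);
            }
        }
        return bad;
    }

    // Two dummy entities for the example
    static class Invoice { }
    static class UserPermission { }

    public static void main(String[] args) {
        DtoInjectionGuard guard = new DtoInjectionGuard();
        guard.allow("clerk", Invoice.class);

        // A clerk sends an invoice... plus a smuggled permission change.
        List<Object> graph = Arrays.asList(new Invoice(), new UserPermission());
        List<Object> bad = guard.rejected("clerk", graph);

        System.out.println(bad.size()); // the UserPermission is rejected
    }
}
```

In a real system the whitelist would live next to the object relational mapping metadata, which is exactly the hook the post argues is missing.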

I wonder if anyone else thinks this is a common security problem... Mmmm... I will add this to C2... I wonder how (or if it) will evolve.

Saturday, August 11, 2007

REST DataService... it was so... obvious?

When I built my first systems using .NET 1.0 (back in the year 2002), I was excited by the idea of using "XML SOAP WebServices" to connect my client side application with my remote business logic... but, after I started developing, I realized I had to do a lot of stuff that just seemed repetitive and hard to use. Why did I have to create a WebMethod for each of the CRUD operations... and many for the "R" in CRUD, one for each possible "select" variation... and it was even more problematic because sometimes I just had to have a "dynamic querying UI" and couldn't find a good way to send the definition of a query in a good "XML" way...
Then I realized... why should I create a method for each variant? Why not just have a single web method:

DataSet executeQuery(string Query)

And I started changing all my code; anything that I wanted to obtain from the server could be obtained that way... (but... I started wondering... is that the one and only true way to use data oriented web services? I remember reading somewhere that it wasn't such a good idea... that SOA wasn't invented for that... after all, it was just a thin XML wrapper over my ADO.NET data provider...)

Fast forward a few years... a lot of people start talking about a doctoral dissertation written by Roy Fielding... and they reach the following conclusions: "SOAP is just too complex", "having to create a different web method for each action makes the interface complex and not that good for interoperation", "one needs to know too much to understand a SOAP web service because methods are not standard", "WSDL is too complex", "SOAP goes against the resource naming philosophy of HTTP", etc., etc. ... And REST is the answer to all our problems...

Well, here I am taking a look at the experimental REST framework "Astoria" Microsoft is creating, and... it looks painfully similar to DataSet executeQuery(string Query), but with one difference... it is not using SQL... it is using a custom ad hoc querying mechanism... that... does the same things SQL does? Perhaps it will implement better some relational features that are badly designed in SQL, but... what is the real advantage here? What is the real difference between this and a SOAP web service that receives SQL and returns an XML document representing rows in a database?

Is there really a difference? Or is it just that we (as an industry) needed to invent SOAP web services to realize all we needed was a thin XML wrapper around SQL?

Update: Astoria is now known as WCF DataServices

Saturday, July 28, 2007

Why is validation so hard?

Here I am, trying to validate my persistent business objects before committing a transaction... since I am using Hibernate... that means they are POJOs...
Hibernate has its validation framework, which allows for validation using Java 5 @Annotations... it is a nice idea... but I don't feel that comfortable validating that way... annotation based validation is fine for simple validation (not null, min/max size, etc.) but is not that good for more complex stuff (validation formulas, stuff like "you can't buy that unless you have money in your account" or "a car has to have 4 wheels or it cannot change its status to 'ready to run'").
The problem IMHO with the Hibernate Validator is that it is triggered on "PreInsert" or "PreUpdate"... and those events are triggered each time "flush" is called (automatically or manually). But flush is called with 2 different purposes: when called explicitly, it often means "put this into the database", and, when called automatically, it often means "put the changes into the database so that I can make queries without risk of inconsistencies", but it doesn't mean "the transaction is committed" (although a lot of people use it with that intention)... now... what if I want to validate only "just before the transaction is committed", not "on flush"? (That can happen if I want to do complex validation that requires querying the database about its state, taking into consideration the modifications that my uncommitted POJOs will produce when flushed into the database.)

I believe that the main problem with the POJO nature of Hibernate persistence is that POJOs do not know they are persistent, and therefore do not know that they need to leave the database in a valid state after being flushed into it... I think Hibernate is missing a mechanism that does a kind of "fake commit" that applies all the changes to the database, then calls a validation API that can check that all the applied changes are valid, and, only after that is verified, allows the transaction to really commit (and if validation fails, it should never commit, it should roll back).

In other words, it should be possible to validate stuff "before commit", not "before flushing", and it should be possible to flush 1 or more times before commit without having the validation triggered, since we might need to perform operations with the data flushed into the database, and only if those operations give a valid result, commit...

The problem, I think, is that Hibernate's event system doesn't cover "OnBeforeCommit" and, even worse, it lacks a mechanism to inform this OnBeforeCommit of which objects were inserted, updated or deleted. (In fact, Hibernate knows that internally, but it doesn't expose an API to retrieve that information... and therefore transactions are blind to the changes that were flushed before the commit, and that makes it really hard to call the validation algorithms of just those entities that were modified inside that particular transaction.)
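
The missing hook described above can be sketched in plain Java. This is not Hibernate API, just a toy "unit of work" that shows the shape of what the post asks for: track which entities were touched, allow any number of flushes without validating, and run validation exactly once over the tracked change set just before commit. All names are invented:

```java
import java.util.*;
import java.util.function.Predicate;

// Toy unit of work: validation happens on commit(), not on flush(),
// and the validators see exactly the set of changed entities.
public class ToyUnitOfWork {

    private final List<Object> changed = new ArrayList<>();
    private final List<Predicate<Object>> validators = new ArrayList<>();

    public void registerChange(Object entity) { changed.add(entity); }

    public void addBeforeCommitValidator(Predicate<Object> v) { validators.add(v); }

    /** Flush freely; validation is NOT triggered here. */
    public void flush() { /* write pending changes to the database (elided) */ }

    /** Validation runs once, over the tracked change set, before commit. */
    public boolean commit() {
        for (Object entity : changed) {
            for (Predicate<Object> v : validators) {
                if (!v.test(entity)) {
                    rollback();
                    return false;
                }
            }
        }
        changed.clear(); // really commit (elided)
        return true;
    }

    private void rollback() { changed.clear(); }

    static class Car { int wheels; Car(int w) { wheels = w; } }

    public static void main(String[] args) {
        ToyUnitOfWork uow = new ToyUnitOfWork();
        // "a car has to have 4 wheels" expressed as a before-commit rule
        uow.addBeforeCommitValidator(e -> !(e instanceof Car) || ((Car) e).wheels == 4);

        uow.registerChange(new Car(4));
        uow.flush();                      // no validation here
        System.out.println(uow.commit()); // valid change set -> commits

        uow.registerChange(new Car(3));
        System.out.println(uow.commit()); // invalid -> rolled back
    }
}
```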

Monday, July 16, 2007

NoResultException is a really stupid idea!

When I saw Query.getSingleResult() I thought, "yes, great idea, I always have to add a utility method like that..."
But then, I met NoResultException... what a great way to screw up a great idea!
Why not just return null??!!! getSingleResult should return 1 element, or null if it cannot find anything!
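
The usual workaround is to wrap the call so that "no result" becomes null instead of an exception; with JPA you would catch NoResultException around getSingleResult(). A list-based equivalent (easier to show self-contained; the class and method names are mine) looks like this:

```java
import java.util.List;

// A helper that gives getSingleResult() the semantics the post wants:
// one element, or null when the query matched nothing.
public class SingleResult {

    /** Returns the only element, null if empty, or fails if there are several. */
    public static <T> T singleOrNull(List<T> results) {
        if (results.isEmpty()) {
            return null;
        }
        if (results.size() > 1) {
            throw new IllegalStateException("query returned " + results.size() + " rows");
        }
        return results.get(0);
    }

    public static void main(String[] args) {
        // With JPA this would be: singleOrNull(query.getResultList())
        System.out.println(singleOrNull(List.of("only")));
        System.out.println(singleOrNull(List.of()));
    }
}
```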

Saturday, July 14, 2007

Unit testing Relational Queries (SQL or HQL or JQL or LINQ...)

Most applications I have built have something to do with a database (I remember that while I was in college I used to think that was not exciting stuff; I used to dream about doing neural network stuff, logic programming in Prolog, etc.) but then I met WebObjects and its object relational mapper (Enterprise Objects Framework) and I got really excited about object oriented data programming... but I always had a problem: to test my object oriented queries I had to "manually" translate them into SQL and test them against the database, and only after they gave me what I thought were correct results would I write them using EOF Qualifiers...
Then I met Hibernate's HQL and I realized it is much more powerful than EOF Qualifiers, but I still had to translate it to SQL to test it. I know I can get the SQL that is generated from the HQL from the debug console, and paste it into my favorite SQL editor... but even then, if I found a mistake, a lot of times it was easier to tweak it in SQL and then manually translate it back to HQL.
Currently, there are some extensions for Eclipse (Hibernate Tools) that make this more "direct" but, what if I don't like (or don't want, or can't) use Eclipse... it would be great if someone could, for example, make a plugin for SQuirreL SQL, but until then... what options do I have?

Then I learned about unit testing... and the answer came to my mind immediately: I just had to write a unit test for each of my queries. That worked fine... in the beginning... until I started having queries that returned thousands (or millions) of objects, and it wasn't such a good idea to output them to the debug console... and I had another problem... how should I write the "asserts" for a query?... and how can I do it so that it doesn't make my test so slow that it becomes unusable? (I can, of course, check the results just by viewing them, but my brain is not that good at saying whether those 10,000 rows really match the idea I had when I wrote that HQL.)

So, I started to look at "what do I do" to check if an SQL query is correct. Let's say, for example, that I write this:

select count(*) from Address,Employee where Address.Id= Employee.AddressId and Employee.Id = 3

(Translated to English: how many addresses does the Employee with Id = 3 have?)

Now... how do I test that? Well, I could run the query from Java (or C#) and add an assert on the count it returns, something like:

assertEquals(2, result);

But then, what happens if someone deletes the row with Id = 3 from the Employee table? That means my test will fail... or what if someone deletes all the addresses of that employee? And what if I want to test that, if there are no addresses for an employee, the answer should be zero?

That is a lot of work just to test if that simple query is right... and I think that work could be done automatically:

Take a look at the SQL, it could be decomposed into:

select count(*) from (select * from Address,Employee where Address.Id= Employee.AddressId and Employee.Id = 3) as EmployeeAddresses

And then we could say: let's automatically check the case when the resulting set is empty, and the case when it is not empty. To check the case when the result is not empty, we need an Employee with Addresses, so we generate:

Select * from Employee where exists(select * from Address where Address.EmployeeId = Employee.Id)

And take the Id of the first employee we get... that Id should give us a non-empty set when used in the original SQL statement we are trying to test... after that, we automatically generate:

Select * from Employee where not exists(select * from Address where Address.EmployeeId = Employee.Id)

And take the Id of the first employee we get... that Id should give us an empty set when used in the original SQL statement we are trying to test...

I call these queries "inverses" of the original one. It is like when one is testing a multiplication: to see if 2 x 3 = 6, just do 6/3 = 2 and 6/2 = 3; if 2 and 3 match the operands of the multiplication, your multiplication is right. The same thing goes for SQL, one just has to find the way to "invert" it. If I could automate this inversion, the automatically generated queries would help me by telling me things that might not be immediately obvious when I look at the original query, and that would help me check if my original query is right... they would be some kind of "invariants" that would help me better understand my query... or maybe I could even write the invariants first, and then create a query and see if it matches my invariants...
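
For the Address/Employee example, generating the two probe queries is mostly string assembly over the parent table, the child table and the foreign key column. A toy sketch (a real tool would parse the query under test to discover these names; here they are hard-coded assumptions):

```java
// Generates the two "inverse" probe queries described above: one whose
// first Id should make the query under test return a non-empty set,
// and one whose first Id should make it return an empty set.
public class InverseQueries {

    static String withChildren(String parent, String child, String fk) {
        return "select * from " + parent + " where exists(select * from "
                + child + " where " + child + "." + fk + " = " + parent + ".Id)";
    }

    static String withoutChildren(String parent, String child, String fk) {
        return "select * from " + parent + " where not exists(select * from "
                + child + " where " + child + "." + fk + " = " + parent + ".Id)";
    }

    public static void main(String[] args) {
        System.out.println(withChildren("Employee", "Address", "EmployeeId"));
        System.out.println(withoutChildren("Employee", "Address", "EmployeeId"));
    }
}
```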

Mmmm... maybe there is another way to "invert" a query to test if it is right, using the actual inverse operation of selecting... that is, "inserting". Could I derive from:

select count(*) from Address,Employee where Address.Id= Employee.AddressId and Employee.Id = 3

Something like (In pseudocode):

Insert Employee;
Store Employee.Id
Run select count(*) from Address,Employee where Address.Id= Employee.AddressId and Employee.Id = EmployeeId
Assert("The answer should be zero")
Insert Address related to Employee
Run select count(*) from Address,Employee where Address.Id= Employee.AddressId and Employee.Id = EmployeeId
Assert("The answer should be one")

This has the advantage that I don't need a database with data already in it, but it has the disadvantage that it takes a lot of time to write a unit test like this in Java, because to insert an employee it might be necessary to:

  • Avoid breaking validation rules not related to this particular test; for example, an Employee must be related to a Department, but if the Department table is empty, then I have to create a Department or I will not be able to insert an Employee.
  • Avoid conflicts with validation rules directly related to this particular test; for example, what if I have a Hibernate interceptor that won't let me insert an Employee without 1 or more Addresses?
The main problem here, I believe, is that if I insert a row leaving a not-null column empty, most databases won't wait until I try to commit the transaction to say "integrity violation" and roll back my changes... therefore it is impossible to write partial data just for the test that I have in front of me. But... could I automatically generate consistent inserts using as a source just the integrity rules at the database level... and the select that I want to test?
I think it can be done... the question is:
What is the algorithm to generate the inserts needed to satisfy an SQL select statement?
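
One piece of such an algorithm is easy to pin down: before generating the inserts, the tables must be ordered so that every table comes after the tables its foreign keys point to (a Department before the Employee that references it, the Employee before its Addresses). That is a plain topological sort over the foreign key graph; the sketch below uses an invented dependency map and skips cycle detection:

```java
import java.util.*;

// Orders tables for insertion: each table appears after the tables it
// references via foreign keys, so generated inserts never violate FKs.
public class InsertOrder {

    /** deps maps each table to the tables it references via foreign keys. */
    static List<String> insertOrder(Map<String, List<String>> deps) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String table : deps.keySet()) {
            visit(table, deps, visited, order);
        }
        return order;
    }

    static void visit(String t, Map<String, List<String>> deps,
                      Set<String> visited, List<String> order) {
        if (visited.contains(t)) return;
        visited.add(t);
        for (String parent : deps.getOrDefault(t, List.of())) {
            visit(parent, deps, visited, order); // parents first
        }
        order.add(t);
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new LinkedHashMap<>();
        deps.put("Address", List.of("Employee"));     // Address.EmployeeId
        deps.put("Employee", List.of("Department"));  // Employee.DepartmentId
        deps.put("Department", List.of());
        System.out.println(insertOrder(deps)); // [Department, Employee, Address]
    }
}
```

The remaining (harder) part is choosing column values that satisfy both the not-null constraints and the predicate of the select under test.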

Thursday, July 05, 2007

The perfect infrastructure (framework?) for data systems

The perfect infrastructure (framework?) for data systems:

  • Has an object relational query language (something like JQL or LINQ) 
  • Has a database with versioning (like Subversion) so you can always consult the database as it was in a particular moment in time transparently
  • Supports transactions... and distributed transactions. (like Spring)
  • Has a framework to exchange graphs of objects with a remote client; objects can be manipulated freely on the client, filtered and queried without hitting the database unnecessarily, and are transparently loaded into the client without the n+1 problem (like a hybrid between Hibernate, Apple's EOF & Carrierwave).
  • Supports "client only transactions" and nested client only transactions (like the Editing Context in WebObject's JavaClient applications) so that it is possible to make rollbacks without hitting the database, and it is even possible to make partial rollbacks... and have savepoint functionality, without going all the way to the database (unless you want to do so)
  • Client objects, server objects and database elements are kept in perfect sync automatically, but it is possible to add logic to a particular tier of the system without too much hassle.
  • Has a validation framework that makes it really easy to write efficient validation code following DRY, and that validates data on the client, on the application server and in the database.
  • Validation code, combined with the versioning capabilities of the infrastructure, allows saving information partially, as easily as writing part of a paper, validating only as you complete the information, with multiple integrity levels.
  • It is possible to disconnect the client from the server, and it will be able to save your changes until the connection is established again
  • The applications built with this perfect infrastructure update themselves automatically.
  • With a very simple configuration tweak, it is possible to download the application "sliced" in pages, or as a complete bundle. This capability is so well integrated that the final user can choose the installation method, and the programmer doesn't even care about this feature.
  • The developer only needs to specify the requirements semi-formally in a language (like Amalgam) and he will receive a running application that adapts dynamically to the specification (unless he chooses to "freeze" a particular feature of the application, in which case the default procedural code for that feature is automatically generated, and the developer can customize it as he wishes... or decide to un-freeze it).
  • Can be coded in any language compatible with a virtual machine that runs anywhere, or can be compiled to a specific platform.
  • Allows for easy report design... by the developer, or the user.
  • It is opensource (or sharedsource), so that in the extremely unlikely case of needing another feature, or finding a bug, it can be easily fixed by the developer
  • It is freely (as in beer) downloadable from the Internet. (or has a reasonable price)
  • It is fully documented, with lots of examples, going from very simple examples for beginners, to really complex real world applications with best practices for experts
  • Includes the source code with unit-test with 100% coverage of the code
  • Supports design by contract coding (from the database up to the client side).

You know what is the funny (or sad) part of all this? I have met frameworks that do 1, 2 or even 3 or more of these features... but none that does them all... will I ever see such a thing? Is it even possible to build it?

Tuesday, July 03, 2007

RIAs: Faulting & Uniquing (or Merging?) (Granite, Ajax)

Today I realized that lazy loading support in Granite Data Services is in its infancy... it is more like "partial loading" (it loads everything that was initialized, and non-initialized stuff will remain "unloaded" forever).

I am thinking this leads to a pattern like this:
  1. I need to work with persons, so I fetch a list of them from a remote service.
  2. I choose to work with the person with id "3".
  3. I present the contacts of person "3". (Here is the tricky stuff: all the contacts that I load have a reference to person "3". What do I do about that? Do I re-fetch it, creating a different object and breaking uniquing, or do I look for a way to prevent that "same but different object" in my application?)
I guess that we will need something like Faulting & Uniquing, and a client side EditingContext (or client side EntityManager)... to control data on the client side... (our own version of the LDS DataStore?)
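
The "uniquing" half of that wish can be sketched as an identity map: fetches for the same (class, id) pair always return the same client side object, so every contact of person "3" points at one Person instance instead of a "same but different" copy. A toy version, with invented names and no Granite API involved:

```java
import java.util.*;
import java.util.function.Supplier;

// Toy identity map: the fetch function only runs the first time a
// (type, id) pair is requested; later requests get the cached instance.
public class IdentityMap {

    private final Map<String, Object> cache = new HashMap<>();

    @SuppressWarnings("unchecked")
    public <T> T get(Class<T> type, Object id, Supplier<T> fetch) {
        String key = type.getName() + "#" + id;
        return (T) cache.computeIfAbsent(key, k -> fetch.get());
    }

    static class Person {
        final Object id;
        Person(Object id) { this.id = id; }
    }

    public static void main(String[] args) {
        IdentityMap map = new IdentityMap();
        // Both "fetches" of person 3 yield the exact same object.
        Person first  = map.get(Person.class, 3, () -> new Person(3));
        Person second = map.get(Person.class, 3, () -> new Person(3));
        System.out.println(first == second);
    }
}
```

Faulting would be the complementary piece: the supplier could return a placeholder that loads its state on first access.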

But... until Granite has that... what could we do as a first step? It would be nice if we could "merge" a recently obtained object with one we fetched before... something like the ADO.NET DataSet... (I cannot believe I am writing that I miss the DataSet.)

I have been thinking... a fully "AJAX" traditional JavaScript based application would have the same problems if it had a complex enough UI... but I haven't heard of anything like that; it seems that most AJAX application developers build applications so simple that they don't even care about having to write and re-write client side data manipulation code... (or... maybe those applications don't even have enough client side behavior to need it?)

I guess that until Granite has its own "data management", the way to handle data will be... to imitate the practices of traditional AJAX applications?

(Mmmm, now with Google Gears... will we see JavaScript based frameworks for automatic handling of DTOs and ORM start to appear everywhere? Perhaps this will revive the interest in something like SDO?)

Monday, June 11, 2007

SOA: Transactional Boundaries, The Paper / Computer Document Impedance Mismatch

Here I am... again, facing the exact same problem... this is becoming repetitive... I have to build an OLTP system... what is an OLTP system?

Online Transaction Processing (or OLTP) is a class of programs that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing.

And right there, in the Wikipedia article about OLTP, it is possible to read about the paper/computer document impedance mismatch (and here I was feeling like I had discovered something): "The term Online Transaction Processing is somewhat ambiguous: some understand "transaction" as a reference to computer or database transactions, while others (such as the Transaction Processing Performance Council) define it in terms of business or commercial transactions."

Well, IMO, that is just a part (a very important part) of the P/C impedance mismatch... when we talk about a transaction oriented computer system... what are we talking about? about computer SQL X/Open DTP transactions? or about business transactions...? Some people believe it is the same thing... Some other people have never realized the difference between paper and computer documents... and ask... why was I able to do X in paper but now it is invalid in the computer? To explain it.... lets go back to the origins...before computer documents.... lets say you have to register a fine against someone (someone committed a mistake against the law, and now they have to pay some money as punishment). So, you start writing down the document "a fine of 8 gold coins for... " suddenly, you feel and urge to go to the latrine.... you stand up and run.... you have left the paper document incomplete... it doesn't say what was the mistake that originated it... it doesn't say who has to pay the 8 gold coins, it doesn't say by which authority is that person obliged to pay... but there is nothing to worry about... is already in ink over the paper... it doesn't matter if you have a real bad digestion problem and you can't continue for 2 months... when you return to the paper sheet, it will still say "a fine of 8 gold coins for...". that is persistence... exactly the same persistence used if you were saving the fine in modern AJAX based system... but if you were using that modern system, when you returned after 2 months, you would find that "your session has timed out" and you have lost the amount you written in the "amount" field (or maybe someone else had to use the computer, and they closed your account, or had to use the plug... and unplugged your computer...). 
So, with paper, you have to "actively" want to revert persistence (by destroying the piece of paper) but, with electronic persistence, losing information is a lot easier: just close the current window without hitting save... and it is lost... and it can get lost even "after" hitting save... because... now... we have created a new enemy for persistence... we have "data integrity"... we have more problems than back when our only fight was getting things written down on paper...

Why do I see integrity as an enemy of persistence? It is pretty simple, really... with the new computer-based OLTP systems, it is impossible to save "data without integrity" (corrupted data?), so, unless you know all the facts precisely, you can't write anything down... It doesn't matter that you know the fine is about selling a controlled substance, that it was issued by the DEA, and signed by "John Smith"; if you don't know who has to pay it (Jane Doe, for example), every time you hit the "save" button, the system is going to answer, in a very nice and polite way (if well designed): "I can not save that fine, because the "First Name" and "Last Name" of the person that has to pay for it are mandatory fields". Well, you say... just let me register the information I have "now", and after I get the missing data, I will return and add it; but the system won't hear your begging... and it will not save your data until it has "integrity"... but before you had to use this new software system, you were able to write this information down on a piece of paper... it didn't matter whether you had the full name of the person or not. Those people in the IT department are going to hear about this, and they will have to change that stubborn attitude...
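That "polite refusal" is just a NOT NULL constraint doing its job. Here is a minimal sketch in Python with SQLite (the table and column names are invented for illustration; the post is technology-agnostic):

```python
import sqlite3

# An in-memory database standing in for the OLTP system's store.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE fine (
        id         INTEGER PRIMARY KEY,
        amount     INTEGER NOT NULL,
        reason     TEXT    NOT NULL,
        first_name TEXT    NOT NULL,  -- mandatory: who has to pay
        last_name  TEXT    NOT NULL
    )
""")

def save_fine(amount, reason, first_name, last_name):
    """Try to persist a fine; answer politely instead of saving
    when the data lacks integrity."""
    try:
        conn.execute(
            "INSERT INTO fine (amount, reason, first_name, last_name) "
            "VALUES (?, ?, ?, ?)",
            (amount, reason, first_name, last_name),
        )
        return "saved"
    except sqlite3.IntegrityError:
        return "I can not save that fine: mandatory fields are missing"

complete = save_fine(8, "selling a controlled substance", "Jane", "Doe")
# The clerk only knows part of the facts... the system refuses:
incomplete = save_fine(8, "selling a controlled substance", None, None)
```

The point is that the refusal is not stubbornness in the application code; it is the schema itself rejecting the write.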

Well, it turns out, sometimes integrity is needed... what can you do with a fine without a name? Suppose you have to manage another 100,000 similar documents... and that you don't know the name of the person that has to pay for half of them... and on some other thousands you don't know how much has to be paid... or why... and on others you don't know any of this stuff at all (you just know someone has to pay something because of some unknown law, and that by adding up all the fines in February, you will earn 50,000 dollars, because the federal government told you so). Now you are in trouble: you have to start defining the minimal information that describes a fine... you have to draw a line between "useful information with integrity" and "vague corrupted stuff", or you will start losing track of what is happening with every document (what makes a document a document? how much can you alter it before it becomes another document?).

Suddenly you realize you can ask the computer to classify the documents and create two lists (that is one of the things computers do really well with structured information), so you ask the computer for a list of "complete integral documents" and another of "incomplete documents", and you say "problem solved, I have 100,000 documents, and of those, 45,000 have integrity (full documents); all the others are work in progress". But, after a few days... you realize you now have only 44,985 full documents... someone has been erasing the data in the documents because he got a bribe... and, unless you have a backup from the previous week, you can't know which documents were corrupted... so, now it turns out that an already "integral" document can go back to being "corrupted" really easily... and, in those cases where the information was incomplete from the beginning, you can't even know whether the source of the problem was that the original information was incomplete... or just that the person that had to enter it into the system was doing so in an incomplete way... intentionally...
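The complete/incomplete split is itself just a pair of queries over a schema that, this time, tolerates missing data. A sketch (again SQLite in Python, with made-up columns) where the payer's name and the amount are the integrity criteria:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Every field is nullable here, so incomplete documents can be stored too.
conn.execute(
    "CREATE TABLE fine (id INTEGER PRIMARY KEY, amount INTEGER, payer TEXT)"
)
conn.executemany(
    "INSERT INTO fine (amount, payer) VALUES (?, ?)",
    [(8, "Jane Doe"), (12, None), (5, "John Roe"), (None, None)],
)

# The two lists: "complete integral documents" vs "work in progress".
complete = conn.execute(
    "SELECT COUNT(*) FROM fine "
    "WHERE payer IS NOT NULL AND amount IS NOT NULL"
).fetchone()[0]
incomplete = conn.execute(
    "SELECT COUNT(*) FROM fine "
    "WHERE payer IS NULL OR amount IS NULL"
).fetchone()[0]
```

Note that this classification says nothing about *why* a document is incomplete, or whether a once-complete one was later erased; detecting that would need an audit trail, which is exactly the problem the paragraph above runs into.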

Thursday, May 31, 2007

WebBrowser + Embedded WebServer + Embedded DataBase = Google Gears


Today I found out about a new Google project, Google Gears... a new browser plugin... that adds an SQL database and a local "web server", only for that browser on that machine (oh, and a WorkerPool for threaded asynchronous processes)...

So... now that the WebBrowser has an SQL database... a Worker Pool... and a WebServer... it can run disconnected applications... you can save your emails locally... or your blog entries... or your RSS feeds (I believe that is what Google Reader does)... WebApplications are now... Desktop applications... (or RIAs, as they are called now).
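What Gears hands the browser is essentially an embedded SQLite database for a disconnected app: write locally first, sync later. Since the Gears JavaScript API only runs inside the plugin, here is the same flow sketched in Python with SQLite (table and function names are invented for illustration):

```python
import sqlite3

# A local store like the one Gears exposes to a disconnected web app.
# Gears persists this per-site on disk; ":memory:" keeps the sketch simple.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE IF NOT EXISTS drafts ("
    "id INTEGER PRIMARY KEY, body TEXT, synced INTEGER DEFAULT 0)"
)

def save_draft(body):
    """Write an entry locally, even with no network connection."""
    db.execute("INSERT INTO drafts (body) VALUES (?)", (body,))

def pending_sync():
    """Entries written while offline, still waiting for the server."""
    return [row[0] for row in
            db.execute("SELECT body FROM drafts WHERE synced = 0 ORDER BY id")]

save_draft("my blog entry, written on the plane")
save_draft("another offline note")
```

A hypothetical sync step would then mark rows `synced = 1` once the server acknowledges them; that reconciliation logic is the part Gears leaves to the application.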

So... now... what is the real advantage of a RIAG (a RIA with "Google Gears") vs a Desktop App? Well, let's look at its features... the RIAG... is slower (interpreted)... needs a plugin like Flash to do real graphical stuff... it can't access just anywhere on disk (we could say it has its own SQL-based filesystem)... therefore it is still not better for graphically intensive applications (I don't see a Photoshop or 3dStudio killer in the near future)... but it could be a nice idea for desktop-like stuff (for example a disconnected mail reader, or perhaps even a disconnected wiki). But wait... we already have disconnected mail readers... (well, but they are not multiplatform... mmmm... wait, Thunderbird IS multiplatform... and of course we have Java to create those multiplatform mail readers if we need to do so)... okay, but we can create a multiplatform Office-like system (yes, a revolutionary idea... wait... what about OpenOffice?) and of course building an Office in a technology like JavaScript will make it really fast on standard hardware (like the very successful Java Office built by Corel a few years ago... wait... never heard of it? mmm, maybe it wasn't that successful... I wonder if that was because Java was really slow on the hardware back then...)

Of course... none of that is going to stop Google Gears... people are just hypnotized with building stuff the "web way" (even if it can be done more easily on the Desktop)... the way I see it... with all this stuff, as the "thin client" of the WebBrowser becomes a "rich client", it is also gaining weight, becoming fat, becoming a fat client... so... by this logic... adding a plugin to all available browsers... is better than a Java applet... but I can't find a logical reason for that... the new RIAs are just applications that use the browser as the platform... The difference with Windows applications? That there are many different browsers following the HTML/JavaScript standards, and only one Windows (of course, every browser follows the standards in its own particular way)... The difference with Java? (there isn't one, but RIAs are slower... and sliced into pages... that seem faster to download... but in fact consume even more bandwidth than classic fat clients with their proprietary binary protocols). Perhaps the key here is the "openness" of HTML & XML and JSON as protocols for communication (but that can also be done in Java, or in .NET & Mono).

So... I just don't get it... what is so great about adding a database plugin to the browser? By following this path, all we are doing is reinventing the wheel (everything that can already be done outside the browser is being re-built inside it... until RIAs become as fat as Fat Clients... and then someone else invents the new Thin Client... and the story repeats again).

I guess the software industry is really, really iterative... we need to go back and re-try stuff from the previous iteration... to realize it wasn't such a bad idea... enhance that idea... and from there, realize that the idea from 2 iterations ago was the solution for the drawbacks of our current problems...

Wednesday, May 09, 2007

Project OpenJFX

Java counterattacks? The other day I posted that Silverlight and Flash might be going to kill Java... well, Java is fighting back:

JavaFX is a new family of Sun products based on Java technology and targeted at the high impact, rich content market.

JavaFX Script is a highly productive scripting language that enables content developers to create rich media and content for deployment on Java environments. JavaFX Script is a declarative, statically typed programming language. It has first-class functions, declarative syntax, list-comprehensions, and incremental dependency-based evaluation. It can make direct calls to Java APIs that are on the platform. Since JavaFX Script is statically typed, it has the same code structuring, reuse, and encapsulation features (such as packages, classes, inheritance, and separate compilation and deployment units) that make it possible to create and maintain very large programs using Java technology. See the FAQ for more information.

I am very impressed with the demos on the site, and the much less verbose way to describe interfaces (when compared with traditional Java Swing code; I am thinking it could even be a threat for XAML & XML... some people on the net believe that XML is the poor man's parser, and that it is being overutilized to create stuff that should be implemented as a specific language... well, JavaFX is not XML... is this the start of a new trend?). I was also very excited to see how easy it is to add animation to Java 2D applications with this new API (everything that can be done with Flash will be possible... and maybe even more...). Now... the questions are:

  • Will Sun release a "UI Designers Pack" for Netbeans that will be pretty much something like Microsoft Expressions for Java?
  • Could OpenJFX be adopted by projects like OpenLaszlo?
  • Is using JavaScript like languages the new trend?
  • Will JSON stuff become the new poor man's parser?

Thursday, May 03, 2007

Eclipse... Is NOT an IDE

Okay... I have been trying to use Eclipse 3.2 as an IDE all week... and that failed miserably...

  • VisualStudio.NET is an IDE
  • Borland Developer Studio is an IDE
  • NetBeans is an IDE
  • IntelliJ is an IDE
  • FlexBuilder (an Eclipse plugin) is an IDE
  • JBuilder (an Eclipse plugin) is an IDE

But... Eclipse... Eclipse... is a PE... a "Plugin Environment", NOT an IDE: an Integrated Development Environment.

After you add JDT to it, it can be considered an IDE... if you only build console tools (command line applications); but if you want to build anything more complex than that... then JDT is a very limited IDE.

Yes, you can add lots of plugins to Eclipse... and make it become JBuilder... (like Borland did, or as Macromedia did with FlexBuilder), but the thing is that it is JBuilder (the plugin) that IS the IDE; Eclipse is just the PLATFORM for the IDE... Saying that Eclipse is an IDE is like saying that Windows is a word processor... or a graphic design application... or, why not, that Windows is an IDE! (Of course, that is crazy... well, saying that Eclipse is an IDE is crazy... comparing it to any real IDE is crazy...) Eclipse is a PLATFORM, and you can build an IDE on top of that, but the quality (and INTEGRATION) of the freely available plugins in Eclipse Callisto is, in my opinion, not enough to call it an IDE.

Netbeans is a great IDE, the best OpenSource IDE for Java, for Swing or Web or J2EE applications. Eclipse is NOT an IDE. Period.

(I guess this is my first rant in a blog)

Monday, April 16, 2007

Swing: Dying between Silverlight & Flash?

So... now Microsoft has Silverlight, and tools like Expressions to create really good looking animations and User Interfaces... and a really small 1 MByte plugin that works on Windows & Mac OS X...

Adobe has Flash... and Flash CS3 & Flex to create really good looking animations and User Interfaces... and a really small (around 1 MByte) plugin... that works... well... everywhere (Windows, Mac OS X, and yes, Linux).

And Java... well... has Swing and SWT... and neither of them has a tool to easily create really good looking animations and User Interfaces... (Matisse is not bad, but it doesn't compare with Flash CS3 or Expressions). The JRE is huge; Swing and SWT have better integration with the current platform's UI than ever before... (but creating really good looking UIs, like those possible with Flash & Silverlight with just the help of a designer... well... it is just not possible).

So... the Java vs .NET war... is now the Silverlight vs Flash war? Or do we now have 3 powers?

Thursday, April 12, 2007

Open Source Design Studio for...

Eclipse & Netbeans are built on Java, a good part (AFAIK not all) of Visual Studio is built on .NET, so why not...
It is a lightweight design studio. It is not a replacement for a full Eclipse IDE, but instead is a lightweight tool that allows easy development of applications and allows you to dive into the platform at an affordable price... but... its name says it all....
Just... exactly... what I need for Laszlo...
The universe is not user friendly...

Perhaps for Laszlo 4.5?

The Presentation Layer: Open Laszlo vs Flex


So I have been evaluating several presentation layer frameworks, trying to choose one for my future applications at my new job... my boss is very interested in developing applications with a "cinematic" experience... so the main contenders are:

  • Open Laszlo 4
  • Flex 2

So far, the main advantages of each one are:

Open Laszlo 4:

  • OpenSource (& free both ways)
  • Flash & JScript UI generation
  • Cinematic user experience

Flex 2:

  • Flash UI generation
  • Free SDK (but AFAIK not OpenSource)
  • Cinematic user experience
  • UI Builder
  • Interactive debugger
  • Syntax Colored Editor for the Scripts
  • Lots of Beginner to Advanced Tutorials
  • Lots of Books
  • Lots of examples with matching tutorials & books
  • Cairngorm architectural framework
  • E4X Support
  • Based on ECMAScript 4

The main disadvantages of Laszlo are Flex's strengths (and vice versa):

Open Laszlo 4:

  • No UI Builder
  • No interactive debugger
  • No syntax coloring for the scripts
  • Lack of advanced tutorials online
  • Lack of Books (there are only 2: one, from the reviews I have read, seems to be pretty much a copy of the Laszlo reference documentation, and the other looks like it will be a really good one, but it is unfinished)
  • Lack of architectural frameworks or guidance (nothing like Cairngorm is available)
  • No E4X support
  • Based on ECMAScript 3 (older version)

Flex 2:

  • Not OpenSource
  • DataServices are expensive (but maybe the Granite project will change that)
  • The UI Builder needs lots of RAM (1 GByte is recommended by Adobe)
  • The UI Builder is an Eclipse plugin (this is a personal disadvantage, because I prefer Netbeans)

I really wanted Laszlo to win this competition (I just really like the product and the idea, and I believe competition is good for customers, so I feel that keeping Laszlo alive will be good for the future of both Laszlo & Flex consumers...) but... so far, it seems that what should be Laszlo's main advantage as an opensource project (community & book support) is just not as developed as Flex's (no good finished Laszlo books, no list of best practices, no architectural guidance or architectural framework), so I think the winner, for me, will have to be, for now, Flex... (but I hope that next year Laszlo improves, and that I have the chance to use it in future projects)

My wish list for Open Laszlo 4.5:

  • Lots of beginner to expert tutorials of full applications (with real world authentication & authorization & architecture best practices)
  • Books, books, books!
  • IDE agnostic UI Builder (something like what is available for TIBCO)
  • Architectural Framework (something like Flex's Cairngorm but for Laszlo)
  • J2EE Integration (something like Flex's Granite project)
  • Interactive IDE integrated script debugger (even if only for Eclipse)

My wish list for Open Laszlo 5.0:

  • E4X support
  • ECMAScript 4 support
  • Something like GWT or Echo2, but with Laszlo as the underlying infrastructure (Java-only coding)

Well, now I'll just wait and see...

Friday, March 30, 2007

Life is a Wicked Problem

Yesterday I was reading about studying online (at the Open University); I was gladly surprised to see that some of their courses are now available online for free, and that you can even remix the content using free mind-mapping software built by the Compendium Institute. The funny thing is that I ended up finding information about wicked problems (and I currently have one: choosing the Java frameworks we will use at my current job)... wicked problems are those with these characteristics (from Wikipedia):
  1. There is no definitive formulation of a wicked problem
  2. Wicked problems have no stopping rule
  3. Solutions to wicked problems are not true-or-false, but good-or-bad
  4. There is no immediate and no ultimate test of a solution to a wicked problem
  5. Every solution to a wicked problem is a "one-shot operation"; because there is no opportunity to learn by trial-and-error, every attempt counts significantly
  6. Wicked problems do not have an enumerable (or an exhaustively describable) set of potential solutions, nor is there a well-described set of permissible operations that may be incorporated into the plan
  7. Every wicked problem is essentially unique
  8. Every wicked problem can be considered to be a symptom of another problem
  9. The existence of a discrepancy representing a wicked problem can be explained in numerous ways. The choice of explanation determines the nature of the problem's resolution
  10. The planner has no right to be wrong (Planners are liable for the consequences of the actions they generate)
And it turns out that:
  • Software development is a wicked problem (and that is my job)
  • Life is a wicked problem (and... well, I am alive)
One thing I especially liked was a document explaining in great detail how wicked problems happen (read it here), and it turns out that a really good way to explain it is with a graphic, like this one.

Now... that image can be produced by thinking that each person participating in the solving of a wicked problem has his own "pendulum", and moves from "solution thinking" to "problem thinking" in his own, unsynchronized way... I find this funny... because a few days ago I was watching Kent Beck's presentation "Ease at Work", where he describes how a lot of software developers feel (one day we are super wizards... the next we are crap... the next we are wizards again), but we are not hired and fired in perfect sync with how we feel (sometimes we are even treated as wizards while inside we feel like losers, and vice versa). In his presentation, Kent Beck says that software development is not only about programmers, it is about people, people interacting to get a problem solved (you can't have a software business without software developers, but you also can't have it without managers, even if it is just a "change of hat"). So, this got me thinking... that the problem Kent Beck is describing is, precisely, a Wicked Problem. What do you think?

Friday, February 23, 2007

OO Principles

Hi! Today I am reading the book Head First Object-Oriented Analysis and Design. I especially liked the OO Principles; I had already read about them on c2, but I really like the way they are summarized in this book:
  • OCP: Classes should be open for extension, but closed for modification. (To avoid new requirements breaking old, tested code.)
  • DRY: Don't repeat yourself; avoid duplicate code by abstracting out things that are common and placing them in a single location. (To avoid having to fix the same thing in different places, or solving the same problem again, and again, and again... I believe this is related to how YAGNI balances OAOO... I guess I could say that YAGNI balances DRY, and therefore DRY equals OAOO.)
  • SRP: Single Responsibility Principle; every object in your system should have a single responsibility, and all the object's services should be focused on carrying out that single responsibility. (To avoid a Big Ball of Mud: a huge object that does everything, that is hard to extend and hard to understand, plus lots of stupid little objects that do nothing. Mmmm, this reminds me of a principle I read about a long time ago about balancing the intelligence between your objects.)
  • LSP: Subclasses should be suitable for their base classes. (To avoid confusing code, in which you believe you can use any member of a class hierarchy, and after trying you realize that the result is not the expected one.)
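A minimal sketch of the LSP point, using the classic Rectangle/Square example in Python (this example is not from the book; it is the standard illustration of the principle):

```python
class Rectangle:
    def __init__(self, width, height):
        self.width, self.height = width, height

    def area(self):
        return self.width * self.height

class Square(Rectangle):
    """Looks like a valid subclass: a square IS a rectangle...
    but its invariant (width == height) is not protected here."""
    def __init__(self, side):
        super().__init__(side, side)

def stretch(rect):
    # Code written against Rectangle: widen it, leave height alone.
    rect.width += 2
    return rect.area()

ok = stretch(Rectangle(2, 3))     # behaves as expected: (2+2) * 3

sq = Square(2)
stretch(sq)                       # treated as any Rectangle...
broken = (sq.width != sq.height)  # ...and now the "square" isn't square
```

The subclass type-checks everywhere a Rectangle is expected, yet substituting it silently breaks an assumption the caller relied on; that is exactly the "result is not the expected one" situation the principle warns about.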

Thursday, February 22, 2007

OpenLaszlo & Spring & Hibernate

Okay, I changed my job again; time to rest from .NET and to refresh my Java abilities... my goal for this week is to find an easy way to teach everyone at my new job how to build a great application using: Hibernate + Spring + OpenLaszlo...
My Java abilities are a little rusty, so this is going to be a really entertaining challenge...

Here are some links I have found about this:

Seems that finding information on Laszlo is harder than I thought...

Friday, January 12, 2007

When will that feature in your system be finished?

That seems like a very simple question: "When will feature "X" be finished?", especially if you do not have a lot of experience being the manager of a software development team... but the problem... is that the task of developing a feature has, as many things in software development, different faces... it's one more example of The Blind Men and the Elephant.

In this case.. when is the feature "finished"?:

  • After the Developer says it is finished?
  • After the Architect says the code is maintainable and extensible?
  • After the Tester says that the feature is bug free?
  • After the User says that the feature meets his expectations?
  • All of the above?
  • None of the above? Then... when?

It is easy to see that I believe this is not an easy question to answer... why do I believe that?

Well, in my opinion, the main difficulty in answering it is that software development is not a linear activity, with a beginning and an end, but an iterative activity: if the Architect doesn't believe that the code is maintainable and extensible, then the code will have to be modified (either by him or by the Developer), and after that, if the Tester finds bugs, the code will have to be modified again to fix the bugs... but those modifications might introduce more bugs to the code, so the code will have to be tested again... it's a cyclical process that has to be done many times before reaching the end; in other words, it is not an "if" answer, but a "while not" answer.

Therefore: effort in trying to reach the end can be registered... but it is impossible to tell how far from the end we are... maybe one, maybe ten, maybe a hundred iterations away.

Friday, January 05, 2007

Is Linq really such a good idea? Are SQL Strings inside C# really such a bad idea?

Today, a friend at work asked me a question: "how do you sort in a stored procedure?" Of course, there is a simple answer:

CREATE PROCEDURE [dbo].[Sorter]
AS
SELECT Field1, Field2, Field3 FROM Customers ORDER BY Field1
But... what if you want to order by any field of the table? That is a "dynamic order by"... well then:

CREATE PROCEDURE [dbo].[Sorter](@SortOrder tinyint = NULL)
AS
SELECT Field1, Field2, Field3 FROM Customers
ORDER BY CASE WHEN @SortOrder = 1 THEN Field1
              WHEN @SortOrder = 2 THEN Field2
              ELSE Field3 END

Great!... well, not so great, because it only works as expected if all 3 fields are of the same type (of course, we could convert them all to string... I mean, "nchar")

My friend has all these problems because he is trying to avoid the dynamic SQL approach (either creating his own dynamic SQL generator, or using any of the available ORMs).

But this post is about LINQ... with LINQ, we will have relational extensions right there inside C#... that means no more language mixing (SQL inside strings inside C#)... but that also means... no more dynamic manipulation of SQL (AFAIK, C# can not manipulate C# as a string, the way it does with SQL)... So... will LINQ really simplify the development of data manipulation applications? Or will it complicate them more by preventing us from easily and dynamically manipulating queries? Or is this just an SQL limitation (maybe SQL should allow us to have parameterized sorting?)
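For what it's worth, the dynamic-SQL approach my friend is avoiding does not have to mean unsafe string mixing: a common pattern is to whitelist the sortable columns and build only that fragment of the query dynamically. A sketch in Python with SQLite (the `Customers` table mirrors the stored procedure above; column names are the same invented ones):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (Field1 TEXT, Field2 INTEGER, Field3 TEXT)")
conn.executemany("INSERT INTO Customers VALUES (?, ?, ?)",
                 [("b", 2, "x"), ("a", 1, "y")])

# Whitelist: user input picks a key, never raw SQL, so there is no
# injection risk and no same-type restriction like the CASE trick has.
SORTABLE = {1: "Field1", 2: "Field2", 3: "Field3"}

def customers_sorted_by(sort_order):
    column = SORTABLE.get(sort_order, "Field1")  # safe fallback
    # Only the whitelisted identifier is interpolated into the SQL text;
    # data values would still go through ? placeholders.
    return conn.execute(
        f"SELECT Field1, Field2, Field3 FROM Customers ORDER BY {column}"
    ).fetchall()

by_first = customers_sorted_by(1)
```

This sidesteps both drawbacks of the CASE version (single type, fixed column set), at the cost of composing the SQL string at runtime, which is precisely the kind of manipulation a compiled LINQ query makes harder.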

I guess I'll have to do more research...

Wednesday, January 03, 2007

Handling Relative & Absolute URLs with System.Web.VirtualPathUtility

Until today, I translated between relative & absolute paths in ASP.NET "by hand". Example:


private string ToAbsolute(string url)
{
    if (url.StartsWith("~"))
    {
        return (HttpContext.Current.Request.ApplicationPath +
            url.Substring(1)).Replace("//", "/");
    }
    return url;
}

But now, it turns out that with ASP.NET 2.0... I can achieve the same effect... by simply calling VirtualPathUtility:

string absolute = VirtualPathUtility.ToAbsolute(url);
Much simpler... don't you think?