Friday, October 31, 2008

Conceptual Queries extension for JPAQL: JPAQL queries are too sensitive to model changes

When a query is written in JPAQL, it still depends too much on the domain model (I think JPAQL could be extended to support Conceptual Queries).

For example, let's say you want to:

Find each white cat that lives in a house with doors made of wood

A cat has one color, it lives in one house, a house has one door, and the door is made of one material.

So, the query in JPAQL looks like this:

select c from Cat c where c.color = 'white' and c.house.door.material.name = 'wood'

But then, let's say our customer changes the requirements:

Find each white cat that lives in a house with doors made of wood

A cat has one color, it lives in many houses, each house has many doors, and each door is made of one or more materials (for example, half glass, half wood).

Since our relationships are now @OneToMany, we write their names in the plural, and, being newbies in JPAQL, we try this (which, of course, does not work):

select c from Cat c where c.color = 'white' and c.houses.doors.materials.name = 'wood'

Now, we can of course solve this problem using subqueries and the EXISTS keyword, but that makes the resulting query much more complex.
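For example, the rewrite might look something like this sketch (illustrative only: the doors and materials collections, and the House-side cat reference implied by the mappedBy="cat" mapping shown later in this post, are assumed names for this model):

select c from Cat c
where c.color = 'white'
and exists (
    select h from House h, IN(h.doors) d, IN(d.materials) m
    where h.cat = c and m.name = 'wood'
)

And even if a query like this works, it is a structurally different query; yet in our English example, the query didn't change: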

Find each white cat that lives in a house with doors made of wood

So, why can't we write in JPAQL something like:

select c from Cat c where c with color = 'white' and lives in House has Doors made of Material with name = 'wood'

That way the query wouldn't have to change even if the multiplicity of the relationships changed. Of course, now the question is: where do the "with", "lives in", "has", and "made of" operators come from? Simple:

  • The with operator is available for each non-@Entity member of the class (strings, integers, etc.).
  • For relationships with other entities, we just add the conceptual name of the relationship as an attribute in the @OneToMany or @ManyToOne annotations:

Example:

Before:

public class Cat {

    @Column
    private String color;

    @ManyToOne(conceptualName = "lives in")
    private House house;

}

After:

public class Cat {

    @Column
    private String color;

    @OneToMany(mappedBy = "cat", conceptualName = "lives in")
    private Set<House> houses;

}

This way changes in the object model do not necessarily affect our queries. What do you think? Is this a good solution?

Tuesday, October 21, 2008

Process Isolation: JavaScript is getting it before Java!

So, with Google Chrome, JavaScript is finally getting process isolation (even IE 8 will work in a Loosely Coupled Internet Explorer (LCIE) mode that gives the JavaScript applications running there real OS-level process isolation). And Java? Well, Java still does not have anything like that.

Sunday, August 24, 2008

The Scientific Method and Test Driven Development

According to Wikipedia, the Scientific Method:

Scientific method refers to a body of techniques for investigating phenomena, acquiring new knowledge, or correcting and integrating previous knowledge. To be termed scientific, a method of inquiry must be based on gathering observable, empirical and measurable evidence subject to specific principles of reasoning.[1] A scientific method consists of the collection of data through observation and experimentation, and the formulation and testing of hypotheses.[2] Although procedures vary from one field of inquiry to another, identifiable features distinguish scientific inquiry from other methodologies of knowledge. Scientific researchers propose hypotheses as explanations of phenomena, and design experimental studies to test these hypotheses.

and Test Driven Development:

Test-Driven Development (TDD) is a software development technique consisting of short iterations where new test cases covering the desired improvement or new functionality are written first, then the production code necessary to pass the tests is implemented, and finally the software is refactored to accommodate changes. The availability of tests before actual development ensures rapid feedback after any change. Practitioners emphasize that test-driven development is a method of designing software, not merely a method of testing.

Can you see the parallels? Tests are like experiments for the universe of predicates that describe your system requirements.
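To make the analogy concrete, here is a minimal JUnit sketch (the DiscountPolicy class and its 10% rule are invented for illustration): each test states a hypothesis about the system, and running it is a repeatable experiment that could falsify that hypothesis.

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class DiscountPolicyTest {

    // The "theory" under test; in TDD it is written after the tests.
    static class DiscountPolicy {
        double priceAfterDiscount(double price) {
            return price > 100.0 ? price * 0.9 : price;
        }
    }

    // Hypothesis: orders over $100 get a 10% discount.
    @Test
    public void ordersOverOneHundredGetATenPercentDiscount() {
        assertEquals(180.0, new DiscountPolicy().priceAfterDiscount(200.0), 0.001);
    }

    // Control experiment: cheap orders keep their price.
    @Test
    public void cheapOrdersAreNotDiscounted() {
        assertEquals(50.0, new DiscountPolicy().priceAfterDiscount(50.0), 0.001);
    }
}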

Thursday, August 21, 2008

JConsole Config

Here is an excellent guide (written by Mike Schouten of Componative) on using and configuring JConsole.

I found Part 2 especially interesting because it explains that JConsole uses two ports to connect to a remote JVM, and only one of them is configurable; the other one is random. The way to detect it is to create a logging.properties file that looks something like this:


handlers = java.util.logging.ConsoleHandler
.level = INFO

java.util.logging.ConsoleHandler.level = FINEST
java.util.logging.ConsoleHandler.formatter = java.util.logging.SimpleFormatter

# Use FINER or FINEST for javax.management.remote.level - FINEST is
# very verbose...
javax.management.level = FINEST
javax.management.remote.level = FINER


Now we can start JConsole with the following command to enable detailed logging of the javax.management classes:

jconsole -J-Djava.util.logging.config.file=logging.properties

Connecting to the remote JVM will now open a separate output window containing the detailed logging; in that output it is possible to see the random port.
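For reference, the configurable port is the one you pass to the standard JMX flags when starting the remote JVM; a typical invocation looks like this (myapp.jar is a placeholder, and authentication/SSL are disabled only to keep the example minimal, never in production):

java -Dcom.sun.management.jmxremote.port=9010 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar myapp.jar

The second, random port is the one the RMI server picks at runtime, which is exactly what the logging trick above reveals.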


Saturday, August 16, 2008

J2EE Application Servers and Eclipse plug-ins: Like going back in time

All this application server development is funny... it is like going back to 16-bit Windows (remember, back then, if a single application crashed, everything went down with it). It is the same with these modern application servers: if one application goes crazy (an infinite loop, a very long and slow process, a memory leak, etc.), it can take down the whole application server, and there is no way around that that I know of. (Same thing with Eclipse: if one plug-in goes crazy, everything goes down.) It is funny (and sad at the same time) because so much time and so many resources were spent giving processes proper isolation at the OS level, and now we are using technologies that make that effort irrelevant.

Perhaps one day there will be a Multi-tasking Virtual Machine (MVM) that solves this problem by providing an efficient and scalable implementation of an infrastructure for multiple, isolated tasks, enabling the co-location of multiple server instances in a single MVM process. Such a technology would also enable the restructuring of a J2EE server implementation as a collection of isolated components, offering increased flexibility and reliability. But in the meantime, we are left with application servers that are inherently unsafe. And that is really bad, especially for development and prototyping (lots of time waiting for the shared components in a J2EE application server to reload if the application in development crashed it), but also for production (it is really risky to deploy a new system into a J2EE server; if it crashes, it will take down all the other applications). And the same thing happens with "integrated" solutions like Eclipse, or more general technologies like OSGi: if a particular plug-in or service crashes, it can crash the entire application.

But the future of the MVM doesn't look so bright; JSR 121, the Application Isolation API Specification, looks abandoned since 2005… Perhaps for JDK 7? Or 8? I hate seeing this kind of project die from indifference… Perhaps now, with OpenJDK, someone will find a way to implement it.
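For the record, the JSR 121 drafts sketched an API along these lines (a sketch only: javax.isolate was never finalized or shipped, so the exact signatures are an assumption based on the draft, and the application class names are invented):

import javax.isolate.Isolate;

public class IsolatedDeployment {
    public static void main(String[] args) throws Exception {
        // Each application runs as an isolated task inside a single MVM
        // process; a crash or runaway loop in one Isolate cannot take
        // down the others.
        Isolate orders = new Isolate("com.example.OrdersApp", new String[0]);
        Isolate reports = new Isolate("com.example.ReportsApp", new String[0]);
        orders.start();
        reports.start();
    }
}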

This is getting repetitive…

Software development, that is… it is getting repetitive:

  • Design the database (the database schema represents facts, but those facts will be incomplete)
  • Code the business logic (queries, queries, queries, in uncomfortable semi-standard SQL); there will be problems with:
    • performance (inefficient queries, lack of indexes, degradation caused by cascading updates or deletes, or by integrity verification)
    • concurrency (overwritten data, deadlocks, inaccurate reads)
    • code maintenance (SQL is ugly, cumbersome, hard to refactor, hard to modularize)
    • ORM (it is either inefficient with a rich domain model, or efficient with an anemic domain model)
  • Code the User Interface
    • If it is a heavy client, it will be ugly (because people erroneously think that only web-based applications can be pretty)
    • If it is built in HTML, as the current hype dictates, it will have browser compatibility problems
    • It will not map easily to the database model / domain model (especially report-like stuff).
    • If it is a web application, it will use convoluted, non-type-safe communication between the different screens (server-side navigation), or it will use lots of JavaScript and essentially become a one-page application, which brings it back to the heavy-client problem:
    • In heavy clients, all navigation is deceptively simple and type-safe, but that leads to spaghetti-like dependencies between the screens.
  • Test
    • Security concerns will be left to the end, and most queries will need recoding, because what data can be seen always depends on the privileges of the user.
    • Queries will return the wrong data when they are cross-compared (a summary report screen will say that the earnings of the day were $1352, but if you sum the amounts of the individual sales, it will be $1456, or $1265, or something else). The most common reason? A status field that you forgot to take into account, indicating that, for example, one of the sales was canceled, or was in a foreign currency, or you forgot to take a discount into consideration (see the SQL sketch after this list).
    • Dependent UI controls (like chained combo boxes), when used in an unexpected order, will fail (or, if used to save data, might fail when editing that data).
    • Some controls are only comfortable to use if the list of rows displayed in them is short (fewer than around 25 items); they will end up with lists of more than 250 elements, sometimes even thousands, and will need to be replaced by search screens.
    • Nobody will test the performance of the application for the large number of concurrent users that will use the system in production.
  • Production
    • Some tables with supposedly immutable data will need CRUD functionality; since there is no time to create the CRUD screens, data will be inserted directly using POSQL (Plain Old SQL), and sooner or later someone will have to edit or delete some data, make a mistake with the WHERE condition, and destroy a large number of rows. The last backup will be many weeks old, and the data will need to be recaptured.
    • Since there were no performance tests, the processor will hit 100%, the RAM will not be enough, and the server will be incredibly slow, or will plain crash.
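Here is the SQL sketch promised above, with an invented sales table: the classic cross-comparison bug, where one of the two queries forgets the status filter.

-- Summary screen (correct): earnings of the day, $1352.
SELECT SUM(amount)
FROM sales
WHERE sale_date = DATE '2008-08-15'
AND status <> 'CANCELED';

-- Detail cross-check: summing every individual sale gives $1456,
-- because this query forgot the status field and counts the canceled
-- sale too.
SELECT SUM(amount)
FROM sales
WHERE sale_date = DATE '2008-08-15';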

Wednesday, August 13, 2008

Navigation: No hard dependencies, no type dependencies, no code dependencies, Why?

In Java, first we had servlets, and each one of them answered requests made to a particular URL (or a URL matching a particular expression), but it was hard to create pages that way, and it was even harder to create a system composed of many "pages" (note the conceptual jump from servlets answering URL requests to "pages").

Then, JSPs were created; take a look at them and you will see that each JSP page knows nothing about any other. JSP was designed thinking that the best way to deal with many pages was to handle them as "independent": let's make each page unaware of what page comes next (and what page came before), just as servlets were somehow unaware of each other.

After that, JSF was invented. This new technology sees "pages" as components (composed of other, simpler components) forming a component tree. All the components in the same page collaborate to give life to a JSF page (but in a pretty limited way, unless you use Facelets, which weren't invented until later, and which didn't help much with navigation anyway). By now, we should be far, far away from the servlets with which we started... but still, each page is unaware of what page comes next (and what page came before), just as servlets were somehow unaware of each other.

Why?

Then we got Seam, with its powerful bijection, to help us deal with the job of exchanging information between pages, and its pages.xml files, to help us deal with the limited navigational capabilities of plain JSF. But still, each page is unaware of what page comes next (and what page came before), just as servlets were somehow unaware of each other.

Why?

In a somewhat parallel universe, perhaps in the Twilight Zone, WebObjects was created. In WebObjects, there are no servlets; "pages" are components (composed of other, simpler components) forming a component tree. All the components in the same page collaborate to give life to a WO page. And if you want, you can treat a WO page as a WO component; they are pretty much the same thing, it is trivial to change from one to the other, and it is easy to create libraries of WO components (comparatively, creating JSF components is so hard that it is mind-boggling). And each page is aware of what page comes next (and what page came before). Let me state that again: the code in each page is aware of what page comes next (and what page came before). Here is a pseudo-code example:

public WOComponent gotoProfilePage() {
    ProfilePage profilePage = new ProfilePage();
    profilePage.setUser(this.getSelectedUser());
    return profilePage;
}


And that's it: that is how you send the currently selected user from the current page to the next page, plain and simple. You just create an object of the type of the page where you want to go, and use its getters and setters to share state with it... what could be simpler than that? Why have I been unable to find a framework that deals with page navigation like this? Well, you might say, it is not that flexible; what if you want configurable navigation? Well, then you can do it like this:

public WOComponent gotoProfilePage() {
    WOComponent profilePage = this.pageWithName("ProfilePage");
    profilePage.takeValueForKey(this.getSelectedUser(), "user");
    return profilePage;
}


Any page with the name "ProfilePage" that has a "user" attribute will be used for this navigation action. So... the question is still in the air... why? Why have I been unable to find a web framework that deals with navigation in this way? We have frameworks that create dependencies between otherwise only dynamically related things, like Hibernate, creating properties in Java objects that mimic the dynamic relations in a database, but we refuse to offer the same services for page navigation... I don't get why... this seems like such a natural way to deal with this kind of problem... is there a disadvantage here that I am not seeing? Or was WebObjects really created in the Twilight Zone?

Update: I just discovered another disadvantage of XML-based navigation in JSF/Seam: it is not possible to use the Java debugger or logging frameworks "out of the box" to debug the conditionals in the XML file; one has to wait for (or build) a custom debugger/logger for JSF/Seam navigation.

Wednesday, July 30, 2008

Spring vs J2EE Application Servers

Lately I have been playing with OC4J, Tomcat, Glassfish and JBoss; before that, I had only played with Spring and Tomcat (but Tomcat was irrelevant, it was there only to start Spring), and now I have realized why Spring is much more comfortable than JEE, even with all the enhancements of JEE 5: Spring is "inside" your application. For example, if you want to connect to Oracle with Spring, you add the JDBC jars to your application and then configure your datasource in the applicationContext.xml file inside your application. If you want to use log4jdbc and slf4j to debug the SQL generated by Hibernate (BTW, the Hibernate .jars are also inside your application), you can, and you only have to modify, again, the applicationContext.xml file inside your application. If you want to add support for Spring transaction handling, or you want to expose some of your beans as CeltiXFire web services, you describe all that, you guessed correctly, inside your application. You really don't care about the stuff outside of your application.
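As a minimal sketch of that "everything inside the application" configuration (the connection details are placeholders, and Commons DBCP is just one possible connection pool), the datasource bean in applicationContext.xml looks something like this:

<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
      destroy-method="close">
    <property name="driverClassName" value="oracle.jdbc.OracleDriver"/>
    <property name="url" value="jdbc:oracle:thin:@localhost:1521:XE"/>
    <property name="username" value="scott"/>
    <property name="password" value="tiger"/>
</bean>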

With JEE it is the other way around: you have to worry about stuff outside of your application (for example, avoiding name conflicts in the JDBC datasources registered in JNDI); you have to worry about installing the Oracle .jars in Tomcat, Glassfish or JBoss (and worry about conflicts if the version installed by default in OC4J, or shared in Tomcat, is not the one you want to use); you have to worry about compatibility with the way web services are handled and configured by your particular application server; etc.

And finally, when you want to test stuff, you have to start the whole huge application server, even if you just want to test whether your login page is working correctly, and whether, after logging in, the right permissions were loaded into the current session.

I think that is pretty much why JEE still feels uncomfortable compared with Spring: JEE is still about the application server, not about my application, and I really don't care about the JEE application server; all I care about is my application. I think JEE 6 (or perhaps JEE 7) should focus on finding a way to bring the attention back to the application, perhaps by forcing the JEE application servers to offer their services the way Spring does it, from inside the application, and only if you want it, as a second, harder-to-configure option, make those services shared.

Sunday, April 20, 2008

The Two Laws Of Dimensional Ontology

  • Law I
    • "One and the same phenomenon projected out of its own dimension into dimensions lower than its own is depicted in such a way that the individual pictures contradict one another."
  • Law II
    • "Different phenomena projected out of their own dimension into one dimension lower than their own are depicted in such a manner that the pictures are ambiguous."
I think these two laws somehow explain what happens when we try to use a model to describe things in the real world: we end up representing them inaccurately and incompletely... The funny thing is that sometimes we seem to change our angle of perspective, and then we realize that our model is wrong... and we try to fix it, and after that we believe that now it is right, but we will never have it right, because the map is not the territory. Perhaps a similar problem arises when we try to use Object-Relational Mapping: we simply lose some of the information because of the perspective we choose to use... Object orientation forces a particular structure that may not exist, and the relational model, by trying to see things from a neutral point of view, ends up not being helpful enough: it seems that humans tend to need to see things from one particular hierarchical perspective at a time, and find it hard to grasp things from all points of view at the same time.

I started to think about this because, as a software engineer, I am sometimes asked to create documentation about the software I am building (you know, the typical use cases stuff), and when you have to write that kind of document many times, the company where you work typically already has some document templates... now, typical document templates have a hierarchical (object-oriented?) structure:
  • UseCases
    • UseCase1
      • Scenario 1
      • Scenario 2
      • Actors that participate in the use case
      • Classes (stereotyped as boundary in RobustnessAnalysis) that participate in the use case
      • Classes (stereotyped as control in RobustnessAnalysis) that participate in the use case
      • Classes (stereotyped as entities in RobustnessAnalysis) that participate in the use case
      • ActivityDiagram?
But then they realize that sometimes (for example, to decide which use cases will be affected by the modification of an entity class) they also need a document like this:
  • Classes
      • Classes (stereotyped as boundary in RobustnessAnalysis) that participate in the use case
        • Class 1
          • Use case 1
          • Use case 2
          • Actors interacting with the class
        • Class 2
          • etc,etc.
Both documents have more or less the same internal model, but present (view?) it from a different perspective. Now, the strange thing is that I do not know of a commercially successful desktop publishing and/or word processing application that allows you to specify such a model, so that you can describe your system once and only once and produce documents with several perspectives (I know products like Adobe FrameMaker are supposed to help with this kind of semantic document structure, but, since they are (AFAIK) internally based on hierarchies, and not on relational theory, they do not really give the user all the power of relational manipulation of data).

I think the problem might be that humans usually organize their ideas following some kind of hierarchical structure... or at least that's the way we are taught to do it at school, perhaps because of the limitations of printed material (a book made of paper cannot be queried and reshaped relationally). But now that we have all this electronic power to help us, we still try to organize things in hierarchies (or in messy graphs) and not relationally (the Internet is a messy graph, not a relational database... am I wrong? XHTML/XML are hierarchical; nobody would accuse them of being relational). We simply find it easier to organize things that way... is it really because that is the natural way in which we think? Or is it because we still do not have good enough relational tools for dealing with knowledge? Is the relational approach actually about creating a model that is valid from any perspective, or is it more about easily changing the perspective? (But if we build those models from a particular perspective, how can we expect the relational approach to produce a model that is valid for more than one perspective?) And finally, why do we not have any relational word processor / relational document standard?

Tuesday, April 15, 2008

Business Case (Is it a good idea to modify an Activity?)

Today we also covered Business Cases; it is an interesting subject. It basically tells you to compare projects using the following criteria:
  • Total Activity Dysfunctionality
  • Total Activity Impact
  • Total Activity Feasibility
If Impact, Feasibility and Dysfunctionality are all high, then it is a good idea to go ahead with the project. On the other hand, if, for example, Feasibility is low, then it would be impossible to actually build the project; or if the Impact level is very low, then it might be easy to do (high feasibility), but it will not help the organization much, so it would not be a good idea to waste time building the project. If Dysfunctionality is low, that means the activity actually works really well, so, typically, the impact of your changes will not be high.
(Hmm, I see some kind of relationship between Dysfunctionality and Impact.)

The project triangle

In project management, a very important concept is that of the project triangle: Scope, Resources, and Time; it is impossible to control more than two of them.
I have had problems with this triangle... always. Typically because there is no time, there is no money (Resources), and the Scope... well, the Scope always increases.
The main problem, in my experience, is that not a lot of people have heard of this triangle, and those that have say: well, it is a nice idea, but the reality of this project is that we need this done cheaply, for tomorrow, and with inexperienced personnel.
When a project is "typically successful", it is because it is not too late, or hasn't exceeded the budget too much... or the scope was not too insufficient.

The problem is that everybody in software development needs to realize that it is not possible to control more than two of these factors. For example:

If you have to do everything in a rush, scope will suffer; and if scope doesn't suffer, then costs will increase...

Perhaps somebody (I?) should create an interactive version of this triangle... one that, when you change two factors, tells you the implications for the third.
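A toy sketch of that idea (the model scope = resources × time is an invented simplification, just to make the trade-off executable, not project-management theory):

public class ProjectTriangle {

    // Toy model: scope (person-months of work) = people * months.
    static double feasibleScope(double people, double months) {
        return people * months;
    }

    // Fix scope and time, derive the resources needed.
    static double requiredPeople(double scope, double months) {
        return scope / months;
    }

    public static void main(String[] args) {
        // Change two factors, and the third is implied:
        System.out.println("3 people for 4 months -> scope of "
                + feasibleScope(3, 4) + " person-months");
        System.out.println("24 person-months in 4 months -> "
                + requiredPeople(24, 4) + " people needed");
    }
}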

I am starting to think that the project triangle is somewhat similar to a radar chart...

Monday, April 14, 2008

Areas of Opportunity

  1. Take advantage of Eventum/Bugzilla and use it to feed Construx Estimate (Metrics, Estimation)
  2. Take advantage of Construx Estimate to feed DotProject in a more accurate way (Metrics, Estimation)
  3. Use the course as a foundation to justify the use of configuration management tools
  4. Give courses to increase the technical level.
  5. Implement a process lifecycle for the development of each feature (covering configuration management, architecture, etc.) that will help make estimates more accurate.

Software Engineering the State of the Art

Today I started my course on project management, and my instructor gave me this very interesting link: Software Engineering the State of the Art

Sunday, March 30, 2008

Alphora Dataphor is OpenSource!

Now that Alphora Dataphor is open source, I believe a relational rebirth could come in the near future... I have been reading the manuals, and, while I am an object weenie, let me tell you that I think Dataphor is a very exciting technology. I hope it finds a way to make "truly relational database" a buzzword like Ruby, and that it finally forces big companies like Oracle to make their databases truly relational.

Thursday, March 13, 2008

Short projects, not short team lifespan

Today I had an epiphany: I realized that while projects must be short, teams should last a long time.

Thursday, February 14, 2008

It is infinitesimal, and Done is the Limit

Today I was reading some articles on InfoQ... one of them caught my attention:

InfoQ: Does "Done" Mean "Shippable"?

It made me remember a blog post I wrote months ago titled "When will that feature in your system be finished", and here I am wondering... if this kind of problem is so common... if it seems to be happening to everyone... why don't we have some kind of manual to help us explain to others that saying something "is done" has a very different meaning for different people... for different situations... and maybe the right answer is that it is never done...

A good system should be able to evolve... to get new features, to make the current ones more robust... to improve...
It should never be "done" in a static way... it should be able to grow organically... it comes closer and closer to being "done", but it never really gets done... (we could even argue that a "done" system is an almost dead system, because it would be a system that cannot improve anymore)

Maybe the problem is that reaching the "done" state in software development is like trying to reach a limit in math: we get closer and closer, but if we do things right, we never reach the done state, because we always have things to add, or things to improve.

Thursday, January 24, 2008

How to find files containing a string in Unix

For that, you can write:

find {directory} -type f | xargs grep -l "{stringToFind}"
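By the way, if {directory} contains file names with spaces, a slightly more robust variant of the same pipeline (same placeholders) is:

find {directory} -type f -print0 | xargs -0 grep -l "{stringToFind}"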

This is really useful... especially for finding out how certain things work (configuration files, like those used inside Eclipse, are really hard to find without this command).

Requirements Analysis: Negative Space

A while ago, I was part of a team working on a crucial project. We were confident, relying heavily on our detailed plans and clear-cut requi...