Showing posts with label ajax.

Tuesday, September 08, 2009

Simplicity and Hubris: Web vs Enterprise Development

I would find it really funny that most people do not understand the difference between intelligent and control interfaces if it did not mean that lots of time and money are wasted trying to make intelligent interfaces work as control interfaces, or to make Web 2.0 Internet solutions work for Enterprise intranet problems.

It is the typical "since Google/Apple/Facebook/Twitter/SomeFreeAndPublicWebSite uses it for X/Y/Z/Q it should be great for the system I am building" syndrome.

Well, it might be great for them when they are trying to solve the X/Y/Z/Q problem, but as it turns out, many developers do not work at places that build the kind of systems these companies build, and are not trying to solve the kind of problems these companies are trying to solve.

Some people think that since our problems do not seem as technologically impressive as those targeted by free public websites they must be easier, or that since a solution worked for the super huge public web site X it should also work for the tiny intranet system we are building. Well, the truth is that most of the time these solutions do not help at all. Please don't get me wrong, some of their ideas are usable, but we need to remember that in the dreaded Intranet Enterprise development world the rules are very different:

  • We do not have huge hardware resources at our disposal
  • We do not have a huge budget and the hiring power to get the best and only the best experts to work with us
  • Our systems are not of the "just register in this form this once and after that use the site with the mouse" kind; our users hate the mouse, because they are going to be capturing data all day, and for that the keyboard is king (and the mouse is irrelevant)
  • We need to interact directly with "special hardware" (such as scanners, printers, fingerprint readers, etc.) and browsers do not know how to talk to those; a browser cannot even print with precision without help from Acrobat Reader.
  • We need the behavior of our systems to be consistent and always the same (Web search engine users do not care if the search results present different information, or the same information in a different order, as long as the "relevant" stuff is in the first pages; Enterprise users expect search results to be exactly the same as last time, unless they have done something to alter that, and when they do, the change they expect is predictable)
  • Our users want access to their data now, and do not care for excuses like "sorry, but there is no access to the super massively great cloud because our Infinitum connection/cable modem/whatever is failing"
  • Our users want data to be confidential (even though they do not really understand the meaning of security and its costs). Having all your data "in the cloud" sounds great, until the country where the cloud exists decides that it wants to apply "economic sanctions" to yours and begins by forbidding you access to the data in the cloud, because the company that owns it is bound by their laws. The day I see Google store its internal mission critical strategic information in Amazon's servers, or vice versa, is the day I will believe that the Cloud is a safe place to store that kind of information.

All those "Web 2.0" companies have done little to help with this stuff (in fact, they have created lots of problems, by forcing us intranet Enterprise developers to use primitive runtime platforms, namely web browsers, to deliver our applications). It is not that they are evil, it is just that they are not targeting the kind of problems Enterprise developers need to solve. And I hate it when I see people recommending approaches that worked fine for these companies for problems that just cannot be solved with them.

Thursday, April 23, 2009

Client side caching: Typical omission in server side component models?

It seems like a simple problem, but it is not (to this day, I have not been able to find a way to do this without complex JavaScript coding):

  1. You have chained comboboxes: Country and State.
  2. You select USA in the Country combobox, and its 50 States are loaded in the States Combo (roundtrip to the server to fetch them)
  3. You select Mexico in the Country combobox, and its 32 States are loaded in the States Combo (roundtrip to the server to fetch them)
  4. Now you select USA in the Country combobox again... how do I tell the server side component framework that I do not want it to go to the server for them? It already went for them the last time I selected USA; I want it to use that as a cache and not go for them again until I tell it to do so.

I provided this use case as just one illustration of a broader class of client side caching scenarios. Support for this kind of scenario might not be needed for all use cases; it might not be needed, for example, for user self-registration... sadly, I do not build that kind of application where I work now. The kind of application I have to build is the kind where the UI is used repeatedly: yes, I build those dreaded "enterprise level" applications used internally by a big organization.

This kind of optimization might seem silly for the typical web application where the user rarely uses the same form more than once, but for enterprise applications, this behavior can be the difference between an application that is perceived to be responsive and useful, and an application that is perceived to be cumbersome and useless.

This cannot be done with “user code” in AribaWeb. It cannot be done in JSF, and it cannot be done in ASP.NET. But it is extremely easy to do if you code in JavaScript and use ExtJS or Cappuccino.
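To make it concrete, here is a rough sketch of what I mean in plain JavaScript, with no framework at all (the element ids, the /states URL and the JSON response format are made up for illustration; they are not the API of ExtJS, Cappuccino, or any of the server side frameworks above):

    // Cache the states already fetched for each country, so re-selecting a
    // country does not trigger another roundtrip to the server.
    // Assumes <select id="country"> and <select id="state"> elements and a
    // hypothetical endpoint /states?country=XX returning a JSON array of names.
    var statesCache = {};

    function loadStates(countryCode) {
      if (statesCache[countryCode]) {          // already fetched: reuse it
        fillStateCombo(statesCache[countryCode]);
        return;
      }
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/states?country=' + encodeURIComponent(countryCode), true);
      xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
          var states = JSON.parse(xhr.responseText);
          statesCache[countryCode] = states;   // remember them for next time
          fillStateCombo(states);
        }
      };
      xhr.send(null);
    }

    function fillStateCombo(states) {
      var combo = document.getElementById('state');
      combo.options.length = 0;
      for (var i = 0; i < states.length; i++) {
        combo.options[combo.options.length] = new Option(states[i], states[i]);
      }
    }

    document.getElementById('country').onchange = function () {
      loadStates(this.value);
    };

The whole "framework" here is the statesCache variable; what I have not found is a way to express that intent declaratively from the server side.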

I wonder... Is this problem really impossible to solve in a declarative way using server side component models? Is this really an insurmountable frontier for server based frameworks? Or could someone come up with a trick that makes this work?

Saturday, April 18, 2009

Inversion of re-render (subscription based re-rendering): Why can't it be this way?

Anyone who has used RichFaces knows that to re-render something, one needs to refer to it by id.

Now, this is (in my opinion) a useful approach but also a very limited one, especially if componentization and code reuse are important goals during development.

Let's say I build a page (let's call it the root page) that has a subview with a modalPanel, which includes a facelet component, which has a subview with a modalPanel that includes another facelet component, and, when something is done here, I want another control, back in the root page, to be re-rendered.

Now, I could of course pass along the id of the component I need to be re-rendered, but... what if I need another component to be re-rendered too? Do I pass along its id too? And what if the other component is also inside (a different) subview with a modalPanel that includes a different facelet component... then all this id "passing" gets really messy, and creates dependencies between components that could otherwise be decoupled... and to make things worse, using meaningful ids in JSF is not considered a good practice, because meaningful ids (especially if used in naming containers like the subviews) rapidly increase the size of the pages (because they get concatenated into the ids of all the contained controls), contributing to bandwidth waste.

Now, I have a proposal: what if re-rendering worked in an "inverted" way: instead of "A" saying that it will re-render "B", we say that "B" will be re-rendered when something (maybe "A") says so by broadcasting event "C".

This would mean that "A" no longer needs to know the name of "B", and "B" wouldn't need to know the name of "A" either; it would only need to know that it should re-render itself if something, somewhere, broadcast the event "C".
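Just to make the idea concrete, here is a conceptual sketch in plain JavaScript (this is not JSF; EventBus and the reRender callbacks are names I made up for illustration):

    // Subscription based re-rendering: components register interest in an
    // event name, and whoever broadcasts that event never needs to know
    // their ids.
    var EventBus = {
      subscribers: {},

      // "B" says: re-render me whenever event "C" is broadcast
      subscribe: function (eventName, reRender) {
        (this.subscribers[eventName] = this.subscribers[eventName] || []).push(reRender);
      },

      // "A" says: event "C" happened (it has no idea who is listening)
      broadcast: function (eventName) {
        var list = this.subscribers[eventName] || [];
        for (var i = 0; i < list.length; i++) {
          list[i]();
        }
      }
    };

    // A panel buried deep inside some component tree subscribes itself:
    EventBus.subscribe('orderSaved', function () {
      var panel = document.getElementById('orderSummary');
      panel.innerHTML = 'refreshing...';
      // here it would fetch its new markup or data and update itself
    });

    // Somewhere completely unrelated, after a successful save:
    EventBus.broadcast('orderSaved');

Whether something equivalent can be wired into the JSF request lifecycle is exactly what I am asking.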

Am I making sense here? Is this doable? Or am I not seeing a major limitation in JSF technology that prevents this from being built? (I am no expert in JSF, so I really can not say, but I do know that an event subscription based re-rendering engine would be a really nice tool to have)

Saturday, October 20, 2007

It was programmed with...

Lately I have been seeing people at work saying "I built that system using PHP" or "We should build all our applications with Java" or "All our applications should be built with Ajax" or "We should (or should not) use Java/J2EE", but in the majority of cases it turns out that the final product is not built with a single technology but with a combination of several... and the problems with that show up when we start to integrate applications:
  • Integrating these applications will be easy, they both use WebServices (yes, but one of them uses JSON, another SOAP, another REST, and another uses Hessian)
  • Let's combine these two web applications into a single one (yes, but one of them is built using Spring+Hibernate and the other was built with JDBC+Home Made Wannabe Framework)
  • The architecture of these applications is very similar, they are all OLTP applications, so integrating their code bases will be easy (or exchanging developers between them will be easy)... and it turns out one of them is built using Stored Procedures in PL/SQL, another uses TopLink, and the last one uses IBatis.
  • These two applications are AJAX based, it will be easy to integrate them (or exchanging developers between them will be easy)... oops, they use two completely different and perhaps even incompatible AJAX frameworks

So... are we really saying something that somehow resembles the truth when we say "I built that application with XXXX"? I think not... but then... why do we keep saying stuff like "That was built in Java" if there are 1000 different ways to build it with Java... 1000 ways to build it with AJAX, 1000 ways to build it with PHP, 1000 ways to build it in .NET... and millions of ways to build it if we start combining these "base" technologies.

Tuesday, July 03, 2007

RIAs: Faulting & Uniquing (or Merging?) (Granite, Ajax)

Today I realized that lazy loading support in Granite Data Services is in its infancy... it is more like "Partial Loading" (it will load everything that is initialized, and non-initialized stuff will remain "unloaded" forever).

I am thinking this leads to a pattern like this:
  1. I need to work with persons, so I fetch a list of them from a remote service.
  2. I choose to work with the person with id "3";
  3. I present the contacts of person "3" (here is the tricky part: all the contacts that I load have a reference to person "3", so what do I do about that? Do I re-fetch it, creating a different object and breaking uniquing, or do I look for a way to prevent that "same but different object" situation in my application?)
I guess that we will need something like Faulting & Uniquing, and a Client Side EditingContext (or Client Side EntityManager)... to control data on the client side... (our own idea of an LDS DataStore?)

But... until Granite has that... what could we do as a first step? It would be nice if we could "merge" a recently obtained object with one we fetched before... something like the ADO.NET DataSet... (I cannot believe I am writing that I miss the DataSet)
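A rough sketch of the kind of "uniquing plus merge" I have in mind, in plain JavaScript (none of this is Granite API; identityMap and mergeIntoContext are names I made up):

    // Client side uniquing: one canonical object per (type, id), and every
    // newly fetched DTO is merged into the canonical instance instead of
    // replacing it.
    var identityMap = {};

    function mergeIntoContext(type, dto) {
      var key = type + ':' + dto.id;
      var canonical = identityMap[key];
      if (!canonical) {
        identityMap[key] = dto;              // first time we see this entity
        return dto;
      }
      for (var field in dto) {               // merge: freshly fetched fields win
        if (dto.hasOwnProperty(field) && dto[field] !== undefined) {
          canonical[field] = dto[field];
        }
      }
      return canonical;
    }

    // Usage: every contact of person "3" ends up pointing at the same person
    // object, no matter how many copies of it the server sent.
    var person3 = mergeIntoContext('Person', { id: 3, name: 'Ana' });
    var contact = mergeIntoContext('Contact', { id: 7, phone: '555-1234' });
    contact.person = mergeIntoContext('Person', { id: 3, name: 'Ana', age: 30 });
    // contact.person === person3  -> true, and person3.age is now 30

Obviously this ignores faulting, change tracking and relationships, which is exactly why a real Client Side EditingContext would be so much nicer.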

I have been thinking... a fully "AJAX", traditional JavaScript based application would have the same problems if it had a complex enough UI... but I haven't heard of anything like that; it seems that most AJAX application developers build applications so simple that they don't even care about having to write and re-write client side data manipulation code... (or... maybe those applications don't even have enough client side behavior to need it?)

I guess that until Granite has its own "Data Management", the way to handle data will be... to imitate the practices of traditional AJAX applications?

(Mmmm, now with Google Gears... will we see JavaScript based frameworks for automatic handling of DTOs and ORM start to appear everywhere? Perhaps this will revive interest in something like SDO?)

Thursday, May 31, 2007

WebBrowser + Embedded WebServer + Embedded DataBase = Google Gears

Hi!

Today I found out about a new Google project, Google Gears... a new browser plugin... that adds an SQL database and a local "Web Server" that exists only for that browser on that machine (oh, and a WorkerPool for threaded asynchronous processes)...

So... now that the WebBrowser has an SQL database... a Worker Pool... and a WebServer... it can run disconnected applications... you can save your emails locally... or your blog entries... or your RSS (I believe that is what Google Reader does)... WebApplications are now... Desktop applications... (or RIAs as they are called now).
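From what I have seen in the Gears documentation, using the local database looks roughly like this (the table, the column names and the alert are mine; I have not verified this against the real plugin):

    // Sketch of the Gears local SQL database, based on the examples in the
    // Gears documentation (gears_init.js must be included first).
    var db = google.gears.factory.create('beta.database');
    db.open('offline-mail');
    db.execute('create table if not exists Drafts (id int, body text)');

    // Save a draft locally, even with no network connection.
    db.execute('insert into Drafts values (?, ?)', [1, 'Hello from the local SQL database']);

    // Read the drafts back.
    var rs = db.execute('select id, body from Drafts');
    var drafts = [];
    while (rs.isValidRow()) {
      drafts.push(rs.field(0) + ': ' + rs.field(1));
      rs.next();
    }
    rs.close();
    alert(drafts.join('\n'));

Which is, again, just a small embedded SQL database... exactly the kind of thing desktop applications have had for years.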

So... now... what is the real advantage of a RIAG (a RIA with "Google Gears") vs a Desktop App? Well, let's look at its features... the RIAG... is slower (interpreted)... needs a plugin like Flash to do real graphical stuff... it can't access just anywhere on disk (we could say it has its own SQL based filesystem)... therefore it is still not better for graphically intensive applications (I don't see a Photoshop or 3dStudio killer in the near future)... but it could be a nice idea for desktop like stuff (for example a disconnected mail reader, or perhaps even a disconnected wiki). But wait... we already have disconnected mail readers... (well, but they are not multiplatform... mmmm... wait, Thunderbird IS multiplatform... and of course we have Java to create those multiplatform mail readers if we need to do so)... okay, but we can create a multiplatform Office like system (yes, a revolutionary idea... wait... what about OpenOffice?) and of course building an Office in a technology like JavaScript will make it really fast on standard hardware (like the very successful Java Office built by Corel a few years ago... wait... never heard of it? mmm, maybe it wasn't that successful... I wonder if that was because Java was really slow on the hardware back then...)

Of course... none of that is going to stop Google Gears... people are just hypnotized with building stuff in the "web way" (even if it can be done more easily on the Desktop)... the way I see it... with all this stuff, as the "thin client" of the WebBrowser becomes a "rich client" it is also gaining weight, becoming fat, becoming a fat client... so... by this logic... adding a plugin to all available browsers... is better than a Java applet... but I can't find a logical reason for that... the new RIAs are just applications that use the browser as the platform... the difference with Windows applications? That there are many different browsers following the HTML/JavaScript standard, and only one Windows (of course every browser follows the standard in its own particular way)... the difference with Java? (there isn't one, but RIAs are slower... and sliced into pages... that seem to be faster to download... but in fact they consume even more bandwidth than classic fat clients with their proprietary binary protocols)... perhaps the key here is the "openness" of HTML & XML and JSON as protocols for communication (but that can also be done in Java, or in .NET & Mono)

So... I just don't get it... what is so great about adding a database plugin to the browser? By following this path all we are doing is reinventing the wheel (everything that can already be done outside the browser is being re-built inside it... until RIAs become as fat as Fat-Clients... and then someone else invents the new Thin-Client... and the story repeats again).

I guess the software industry is really, really iterative... we need to go back and re-try stuff from the previous iteration... to realize it wasn't such a bad idea... enhance that idea... and from there, realize that the idea from two iterations ago was the solution for the drawbacks of our current problems...

Monday, October 02, 2006

Are we asking too much of WebApps?

Currently, in most enterprises, if an application has to be built... it will be built as a WebApp... what do I mean by "WebApp"? Well, you know, it is an application with an HTML User Interface (helped with a bit of JavaScript here and there). Everybody seems to think that is the best solution for all problems: no deployment, no client platform dependency, lots of programmers that "know how" to build this kind of application, lower security risks (no need to have a direct connection to the database from a remote client, no need to open special ports... only the well known port 80).

But... are web apps really such a good idea? Or is it just another example of "to a hammer every problem looks like a nail"?

  • You don't have to do deployment: well, that is such an advantage, no need to install, no need to update... but it has its dark side... the UI (that in most cases won't change often) has to travel with your data... and with the ever increasing need for more interactivity in the UI... that means your really complex UI is going to travel to the client every time...
  • No client platform dependency: Great, it can run on Windows, Linux, or MacOS, so I don't have to worry, right?... wrong! You have to worry about browser compatibility (will it work in Firefox? will it work in Explorer? will it work in Safari?)
  • Lots of cheap programmers that "know how"... (forthcoming)
  • Lower security risks... (forthcoming)
