Monday, October 30, 2006

HttpUtility.UrlEncode is NOT equivalent to javascript "escape"

A few days ago, I posted about how to fix a "," problem in a very good autocomplete control.

As part of the solution I proposed using "HttpUtility.UrlEncode", thinking that it was equivalent to the javascript "escape" function, but... it is not equivalent!

For example, HttpUtility.UrlEncode encodes " " (whitespace) as "+", while the javascript escape function encodes it as "%20".

The same thing goes for encodeURI, in case you were wondering (and please don't even bother trying HttpUtility.HtmlEncode, its purpose is completely different).

The answer? Good old JScript.NET (on the server side):

Microsoft.JScript.GlobalObject.escape(string)

The documentation reads "This method supports the .NET Framework infrastructure and is not intended to be used directly from your code.", but, since there is no other option, and I am NOT going to implement this by myself, I think this is the best option available.
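
Here is a minimal sketch of the difference (it needs references to System.Web.dll and Microsoft.JScript.dll):

using System;
using System.Web;

class EncodingComparison
{
    static void Main()
    {
        string text = "hello world";
        // HttpUtility.UrlEncode turns the whitespace into "+"
        Console.WriteLine(HttpUtility.UrlEncode(text));                  // hello+world
        // GlobalObject.escape behaves like the javascript escape: "%20"
        Console.WriteLine(Microsoft.JScript.GlobalObject.escape(text));  // hello%20world
    }
}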

Do you know any other way to do it? Would you please share it with me?

Friday, October 27, 2006

Objecthood!

According to Wikipedia, objecthood "is the state of being an object." I particularly liked that article, especially the part that says:

"Theories of objecthood address two problems: the change problem and the problem of substance."

The answer to the change problem is the answer to the following question: if objects manifest themselves as clusters of properties... if you remove all the properties... what remains? According to substance theory, the answer is a substance (that which stands under the change)... if we apply this to Object Relational mapping... then... is the objectId (primary key) the substance? Could this be used as a philosophical justification for the immutability of the primary key?

Then the Wikipedia text continues: "Because substances are only experienced through their properties, a substance itself is never directly experienced." Is that a justification for never using the id (keeping it private)? Of course, there is another theory, the theory that since substance cannot be experienced... it doesn't exist; that is the "bundle theory".

In bundle theory all objects are merely clusters of their properties, and therefore... everything is always changing... and no such thing as object ids / primary keys really exists in the real world. This has an impact on relationships too, because we usually define a relationship as something that binds together 2 or more objects (at their substance level?)... but since an object's substance doesn't actually exist, then there is no such thing as a "relationship" between two objects..

According to this, relationships can only exist as something defined at the "property cluster level"... two (or more) objects aren't really related at the substance level... they simply "share a property" (is that why in SQL there isn't a way to make queries taking advantage of integrity relationships? Is "shared property" a better name for integrity relationships?)

And it gets worse: "Whether objects are just collections of properties or separate from those properties appears to be a strict dichotomy. That is, it seems that objects must be either collections of properties or something else." Is that the philosophical foundation for the "object/relational impedance mismatch"?

It seems that the conflict between a relational (bundle theory) and an object (substance) worldview has been there since way before a relational database or an object oriented language was invented...

I wonder what else I could learn from reading about these philosophical theories... (could there be a third one? is there a software programming paradigm for it?)

Okey, things I miss about NHibernate

Lately I have been working without a formal object relational mapper (you know, using custom "hand made" business objects, and fetching data from the database using stored procedures).

Here are the things I miss from NHibernate:

  • Automatic caching of objects by primary key
  • Easy support for primary keys with multiple fields
  • Polymorphic Queries (and all the benefits of persistent inheritance)
  • Queries with "." navigation: "object.Relationship.Relationship.Field = :parameter" (see the sketch after this list)
  • Prefetching relationships: "load Products with their to-one relationship with Vendor preloaded"
  • Easy pagination (instead of trying to figure out how to do it in combination with stored procedures for a particular version of a particular database (that can be a real nightmare))
  • Interactive single and multi-column sorting (try to do it with stored procedures, and of course without cheating by using the execute command; remember, if you use execute, all the so-called "security benefits" of stored procedures vs dynamic SQL disappear)
  • Concurrency support (automatic increment of a "version" field when saving changes, even if those changes are not just field changes but relationship changes; a custom exception type for concurrency errors; optimistic locking)
  • Query by example (no need to concatenate strings by hand based on whether a field is null, no need to worry about upper case or lower case comparison when comparing each field, no need to update a query definition if a column is added to or dropped from a table)
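
For example, a query that I would write like this with NHibernate (just a minimal sketch, assuming an open ISession and hypothetical Product/Vendor/Country mapped classes) would need hand-written joins and string concatenation with plain SQL:

using System.Collections;
using NHibernate;

public IList FindProductsByVendorCountry(ISession session, string countryName)
{
    // "." navigation: NHibernate generates the joins from the mapping,
    // and the named parameter replaces manual string concatenation
    IQuery query = session.CreateQuery(
        "from Product p where p.Vendor.Country.Name = :countryName");
    query.SetParameter("countryName", countryName);
    return query.List();
}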

And, if somehow you have to work with a database that doesn't have support for stored procedures (and you can't use NHibernate)... things get even more "interesting":

  • Aliases for ugly table names and ugly field names (instead of "select * from PDT where PCKT = 1" you write "select * from Products where PackageType = 1")
  • Named parameters: instead of "select * from PDT where PCKT = ? and RQTY = ?" you can write "select * from Products where PackageType = :packageType and RemainingQuantity = :remainingQuantity". This is especially a nightmare when one has to use OleDb, and it gets worse with Inserts or Updates: "INSERT INTO PDT (NN,PCKT,RQTY,DRP,PRNBR) VALUES (?,?,?,?,?)" instead of the simple "session.Save(product)" (see the sketch below)
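
To make the point clearer, this is roughly what the OleDb version looks like (only a sketch; the Product properties are just my guesses at what those cryptic columns mean), compared with the ORM call at the end:

using System.Data.OleDb;

public void InsertProduct(string connectionString, Product product)
{
    using (OleDbConnection connection = new OleDbConnection(connectionString))
    using (OleDbCommand command = new OleDbCommand(
        "INSERT INTO PDT (NN,PCKT,RQTY,DRP,PRNBR) VALUES (?,?,?,?,?)", connection))
    {
        // OleDb binds parameters strictly by position, so this order must
        // match the column order in the SQL text above
        command.Parameters.Add(new OleDbParameter("Name", product.Name));
        command.Parameters.Add(new OleDbParameter("PackageType", product.PackageType));
        command.Parameters.Add(new OleDbParameter("RemainingQuantity", product.RemainingQuantity));
        command.Parameters.Add(new OleDbParameter("Description", product.Description));
        command.Parameters.Add(new OleDbParameter("PartNumber", product.PartNumber));

        connection.Open();
        command.ExecuteNonQuery();
    }
}

// versus, with NHibernate, simply:
// session.Save(product);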

I will update this list as it increases...

Friday, October 20, 2006

Fixed "," problem with SmartAutoComplete

Going back to the "," problem, the solution seems to be modifying this
in SmartAutoCompleteExtender.cs (remember to add a reference to Microsoft.JScript.dll to use GlobalObject.escape instead of HttpUtility.UrlEncode):

public string GetCallbackResult()
{
    if (_completionItems == null)
        return string.Empty;
    for (int i = 0; i < _completionItems.Length; i++)
    {
        _completionItems[i] = GlobalObject.escape(_completionItems[i]);
    }
    string join = string.Join(",", _completionItems);
    return join;
}

And this in SmartAutoCompleteExtenderBehavior.js

this._onCallbackComplete = function(result, context)
{
    var results = result.split(',');
    var decodedResults = new Array();
    for (var i = 0; i < results.length; i++) {
        decodedResults.push(unescape(results[i]));
    }
    if (_enableCache)
    {
        if (!_cache)
            _cache = { };
        _cache[context] = decodedResults;
    }
    _autoCompleteBehavior._update(context, decodedResults, false);
}

The trick is to "escape" things in C#... and "unescape" them in JavaScript. (I hope this helps someone else with this issue; it works fine for me.)

Tuesday, October 17, 2006

Microsoft Expression Web Designer

Downloaded it...

Looks great...

Great CSS support...

No support for third party web controls...

No support for web user controls (.ascx)...

No visual support for skins...

Closed it...

Now I wait and hope that the released version is not as heavily limited as Beta 1... right now it is pretty useless for any serious ASP.NET development.

Monday, October 02, 2006

Are we asking too much of WebApps?

Currently, in most enterprises, if an application has to be built... it will be built as a WebApp... what do I mean by "WebApp"? Well, you know, it's an application with an HTML user interface (helped with a bit of JavaScript here and there). Everybody seems to think that it is the best solution for all problems (no deployment, no client platform dependency, lots of programmers that "know how" to build this kind of application, lower security risks: no need to have a direct connection to the database from a remote client, no need to open special ports... only the well known port 80).

But... are web apps really such a good idea? Or is it just another example of "to a hammer every problem looks like a nail"?

  • You don't have to do deployment: well, that is such an advantage, no need to install, no need to update... but it has its dark side... the UI (that in most cases won't change often) has to travel with your data... and with the ever increasing need for more interactivity in the UI... that means your really complex UI is going to travel to the client every time... 
  • No client platform dependency: great, it can run on Windows, Linux, MacOS, I don't have to worry, right? ... wrong! You have to worry about browser compatibility (will it work in Firefox? will it work in Internet Explorer? will it work in Safari?)
  • Lots of cheap programmers that "know how"... (forthcoming)
  • Lower security risks... (forthcoming)

Monday, September 18, 2006

ASP.NET 2.0 Security... Damages Layer Separation

I have been reading about "The Demise of the Security Application Block" and I quote:


"Specifically, the factories, interfaces and providers for authentication, roles and profile have been removed. Equivalent functionality is provided by the new System.Web.Security.Membership class and System.Web.Profile namespace."



I think that is a really big mistake, architecturally speaking... why? Well, because the System.Web.Security.Membership class and the System.Web.Profile namespace are for WEB applications, not for WindowsForms.NET applications, and authentication and authorization should be services independent of the presentation mechanism. The Security Application Block should wrap the System.Web.Security.Membership class and System.Web.Profile namespace functionality and allow the developer to work in a presentation independent way (or Microsoft should change System.Web.Security.Membership and System.Web.Profile and make them System.Security.Membership and System.Profile).
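
Something like this is what I have in mind (just a sketch, with hypothetical interface and class names; only this one class would reference System.Web):

using System.Web.Security;

public interface IAuthenticationService
{
    bool Authenticate(string userName, string password);
    bool IsInRole(string userName, string roleName);
}

public class MembershipAuthenticationService : IAuthenticationService
{
    public bool Authenticate(string userName, string password)
    {
        // Delegates to the ASP.NET 2.0 membership provider
        return Membership.ValidateUser(userName, password);
    }

    public bool IsInRole(string userName, string roleName)
    {
        return Roles.IsUserInRole(userName, roleName);
    }
}

The lower layers would only see IAuthenticationService, so they stay free of any reference to the presentation mechanism.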


I think the easiest thing to do is to bring back the factories, interfaces and providers for authentication, roles and profile of the Security Application Block, which provided these services without violating layer separation (following the good example of JAAS).


And perhaps, in the future, the "revived" Security Application Block should be merged into .NET itself... or a new presentation independent API named ".NET Authentication and Authorization Service" (NAAS?) should be created...


What do you think?

Layer Separation Violation: Web Configuration Manager... one of ASP.NET 2.0 mistakes?

The management of configuration in .NET applications was improved in .NET 2.0: now it is easier to have custom configuration sections, and we even have a pre-built configuration section for connection strings (very useful for ADO.NET applications).

But... there are some design decisions I just don't get... "Configuration" is a concern separate from presentation (configuration can be used to set UI behavior, but in my opinion configuration belongs in a lower layer), yet now we have the WebConfigurationManager for web applications and the ConfigurationManager for client applications...

From MSDN website:

Using WebConfigurationManager is the preferred way to work with configuration files related to Web applications. For client applications, use the ConfigurationManager class.

At first I thought... "this is going to be a problem", now I won't be able to read configuration in the lower layers (business facade, business rules, data access) without having to use my own proprietary "trick" to hide the fact that for those layers it is irrelevant whether the application has a WebUI or a WinUI... how could the .NET team make such a bad decision?

But then I realized... that I can use the ConfigurationManager from a WebUI (ASP.NET application) without any problem... so my lower layers don't suffer from contamination by the presentation mechanism...
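
For example, this kind of code (a minimal sketch, with a hypothetical connection string name) can live in the data access layer and run unchanged under ASP.NET or WindowsForms:

using System.Configuration;

public static class DataAccessConfiguration
{
    public static string GetMainConnectionString()
    {
        // Reads web.config under ASP.NET and app.config in a client application
        return ConfigurationManager.ConnectionStrings["MainDatabase"].ConnectionString;
    }
}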

But now I am wondering... if the ConfigurationManager works perfectly... and doesn't tie my code to a particular presentation mechanism... why should I use the WebConfigurationManager... in fact... why should anyone use the WebConfigurationManager? I just haven't figured that out...

Saturday, September 09, 2006

The Tao of the Software Architect

Today I read about the Software Architect role on the SEI website, and I found a lot of very useful information.

And the essay that I liked the most was "The Tao of the Software Architect" (which can also be found on the IBM website).

From the "The Tao for the software architect" I specially liked this part:

When the process is lost, there is good practice.
When good practice is lost, there are rules.
When rules are lost, there is ritual.
Ritual is the beginning of chaos.

I have seen places where ritual has taken control of everything, and the people there believe they are doing things right, because they are following the ritual, but they don't realize that they are in chaos, because they no longer understand why they are doing what they do... and that is really bad, because then they don't look for ways to get out of the chaos: they think that what they do is fine, since they are following the ritual (even if the ritual doesn't achieve anything valuable for the organization).

Another part I also liked a lot is this:

If you want to be a great leader,
stop trying to control.
Let go of fixed plans and concepts and
the team will govern itself.
The more prohibitions you have,
the less disciplined the team will be.
The more you coerce,
the less secure the team will be.
The more external help you call,
the less self-reliant the team will be.

I believe I have that problem: sometimes I become a "control freak" and want everything to be built the way I say, in the time I say, exactly as I say... but, as I gain experience, I am starting to realize that is not the way to build a system. To be a good architect I have to help set a process that will be used to build the system, make sure that the process is followed and that the process evolves with the team and the system being built, and at the same time get out of the way and let the team flourish...

As with most of these Zen-like things... the truth is somewhere in the middle:

Too little control and you will have chaos; too much control... especially just for the sake of following the ritual... and you will negatively affect the team's motivation, creativity... and productivity.

I guess I need to find the Middle Path for Software Architecture...

Thursday, September 07, 2006

Administering an ASP.NET WebSite

Hi! One of my friends showed me how to access the ASP.NET Administration WebSite without calling it from within VS.NET:
  • Create a virtual directory ASP.NETWebAdminFiles in IIS that points to C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\ASP.NETWebAdminFiles
  • Open the properties window of the new virtual directory, make sure that it is configured to run with ASP.NET 2.0, and in the Security tab uncheck Anonymous Access and check Integrated Windows Authentication. After that, you will be able to connect to the WebAdminTool using the following syntax: http://localhost/ASP.NETWebAdminFiles/default.aspx?applicationPhysicalPath=XXX&applicationUrl=/YYY; in my case, it is: http://localhost/ASP.NETWebAdminFiles/default.aspx?applicationPhysicalPath=D:\Tasks\Libranyon\Photonyon\&applicationUrl=/Photonyon
Although I don't recommend doing it, if you want to access the WebAdminTool from another computer, open WebAdminPage.cs (from C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\ASP.NETWebAdminFiles\App_Code) and comment out the following code block:

if (!application.Context.Request.IsLocal) {
    SecurityException securityException = new SecurityException(
        (string) HttpContext.GetGlobalResourceObject("GlobalResources", "WebAdmin_ConfigurationIsLocalOnly"));
    WebAdminPage.SetCurrentException(application.Context, securityException);
    application.Server.Transfer("~/error.aspx");
}

The WebAdminTool will still be protected by Integrated Windows Authentication, so you still have some defense here. I found these same instructions here, so I guess my friend got them from there.

Friday, September 01, 2006

Refactoring Bug in VS.NET 2005

Every time you modify the name of an element in a webform, it triggers refactoring and checks the whole project... and that is just plain not necessary... it is a really, really annoying bug! Perhaps they will fix it in a future release... I just wish I could turn off all VS.NET refactorings... (and let ReSharper do the job)

Thursday, August 31, 2006

Extreme Normal Form

Today I read about "Extreme Normal Form"... and well I really liked it:

Your classes are small and your methods are small; you've said everything OnceAndOnlyOnce and you've removed the last piece of unnecessary code.
Somewhere far away an Aikido master stands in a quiet room. He is centered.. aware.. ready for anything.
Outside a sprinter shakes out her legs and settles into the block, waiting for the crack of the gun while a cool wind arches across the track.
Softly, a bell sounds.. all your tests have passed.
Your code is in ExtremeNormalForm... tuned to today and poised to strike at tomorrow.

I also liked a lot how OnceAndOnlyOnce balances YouArentGonnaNeedIt. I have never liked the idea of applying YouArentGonnaNeedIt by itself, but now I believe that by using OnceAndOnlyOnce a good balance can be achieved. Read it here.

Monday, May 22, 2006

OMG: Object Orientation and Stored Procedures

OMG! How many times have I read the argument "using stored procedures is object oriented programming": just think of the database as an object, and think of each stored procedure as a method of the database....
Now... I am not going to say that using stored procedures is wrong... and I am not going to say that the only correct way of solving software problems is using object orientation... (I do prefer to use object orientation, and I do prefer to use an object relational mapper instead of stored procedures, especially for OLTP applications; I think applications built that way are more maintainable, more portable between databases, can be built faster, scale better, etc., etc.)

But for batch processing, stored procedures have a clear advantage, and in my experience many business problems are solved with an OLTP application plus a short but very important list of batch processes... (of course, the catch here is that if you built your application using an ORM, you probably have a lot of your business logic already implemented in a presentation-independent form, and, depending on the size of your database, you could program your batch processes using your business objects... that has its disadvantages (slower performance than stored procedures) and its advantages (reuse)... I believe reuse and maintainability are more important than performance... especially because it is possible to worry about performance in the wrong way: for example, Premature Optimization)

But the main point of this post is that, at first, I didn't think that using stored procedures was object oriented programming... it was "procedural programming"... or perhaps even "relational programming", but certainly not object oriented programming (where is the inheritance? where is the polymorphism?)... but after giving it some more thought... I realized that while it might not be "object oriented", it could be "object based"... stored procedures can encapsulate your tables (one of the major advantages of stored procedures) and you can see them as methods of your "database" object...

So... now... your (OLTP?) system is a thin presentation layer on top of the "database object"... now... is there something in favor of... or against that in object oriented design theory?... mmm... it achieves Presentation/Business Logic separation... and that is good... but I still feel there is something that doesn't feel right...

OMG! Well, it turns out there is: now the database is a God Object.
From Wikipedia: a God object is an object that knows too much or does too much. The God object is an example of an anti-pattern; it goes against divide & conquer... using a God object is analogous to failing to use subroutines in procedural programming languages; because so much code references it, it makes maintenance difficult...

So... the next time someone tells you that using stored procedures is good object oriented programming... you could answer him or her: Oh My God (Object)! (You could think of it as object based, but, from an object oriented point of view... is it a recommended solution? Couldn't it be seen as an anti-pattern? I am not saying it is wrong, I am just saying that from an object oriented point of view it might not be the best way to solve the problem... of course, using object orientation might not be the right way to solve this particular problem; after all, object orientation is not a Silver Bullet)

Tuesday, May 16, 2006

Installing Crystal Reports for VS.NET 2005

After wasting several hours, it turns out that, before installing a WindowsForms .NET 2.0 application that requires Crystal Reports, one needs to install CRRedist2005_x86.msi from C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\BootStrapper\Packages\CrystalReports; it should be easier to find...

In that folder there is also a product.xml that could be for bootstrapping CRRedist2005_x86.msi with your own .msi so that you can install everything in one step (well, I think that is what it is for); I have to do more research...

Monday, May 15, 2006

Frederik Gheysels' weblog

Frederik Gheysels' weblog

It was almost as if I heard myself complaining about the bad design most RAD tools promote... what I wonder is... why couldn't RAD tools promote doing it "the right way", with proper MVC separation and everything?...

It is like the new Java Studio Creator from Sun: the first version copied the DataSet idea from VS.NET, and copied it, IMHO, in the worst possible way (by assuming incorrectly that .NET databinding only works for the DataSet and not for any .NET object... it is true that it seems like that at first look, but if you analyze the API closer you can see that it is biased to make things easier for the DataSet, yet it does allow pure object databinding)... and what did the Sun guys do? They made it work only for tables! Without object support! (They tried to fix it for Java Studio Creator 2, but the object-view has a bug that makes it unusable... I just can't believe it, how could they make such a mistake having really good ORM tools like Hibernate, EOF, Castor or Cayenne? I was expecting full object databinding from them... but, until now, there is nothing close to .NET databinding in the Java world.)

Sunday, May 14, 2006

Esteban's Blog (A Delphi Programming dedicated blog): Ad Hoc SQL vs Stored Procedures

Esteban's Blog (A Delphi Programming dedicated blog): Ad Hoc SQL vs Stored Procedures

I am happy I found this blog entry; I have been looking for a discussion about Ad Hoc SQL vs Stored Procedures. It is important to know both sides of the discussion, so that the next time it is necessary to choose, one can provide the best possible solution.

Saturday, May 13, 2006

Why can't Hibernate just load objects on demand?

Today I was reading: Why can't Hibernate just load objects on demand?
I find some of its statements interesting (I realize this is kind of a sensitive topic, but I just don't feel that the information there is convincing enough, especially if you have used other ORMs that do have distributed non-transactional lazy loading):


If Hibernate would, hidden from the developer and outside of any transaction demarcation, start random database connections and transactions, why have transaction demarcation at all?



That really is a very good question: what exactly is so good about having transaction demarcation? (Wouldn't it be better to have "automatic transaction demarcation"?) Like this:


Customer newCustomer = new Customer(EcoSpace);
newCustomer.Name = "Air Software Ltd";
newCustomer.Telephone = "+44 (0)121 243 6689";

TelephoneNumber alternateTelephoneNumber = new TelephoneNumber(EcoSpace);
alternateTelephoneNumber.Name = "Home number";
alternateTelephoneNumber.Number = "+44 (0)121 999 9999";
newCustomer.ContactDetails.Add(alternateTelephoneNumber);

EcoSpace.UpdateDatabase();

Wouldn't it be better if, just as we have the Session and the transactions... we could also have an equivalent to ECO's EcoSpace?
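
For comparison, this is roughly what the same work looks like with explicit demarcation in NHibernate (a minimal sketch, assuming an ISessionFactory and the same hypothetical Customer class):

using NHibernate;

public void AddCustomer(ISessionFactory sessionFactory)
{
    using (ISession session = sessionFactory.OpenSession())
    {
        ITransaction transaction = session.BeginTransaction();
        try
        {
            Customer newCustomer = new Customer();
            newCustomer.Name = "Air Software Ltd";
            newCustomer.Telephone = "+44 (0)121 243 6689";
            session.Save(newCustomer);
            transaction.Commit();
        }
        catch
        {
            // The demarcation (and the cleanup) is entirely my responsibility here
            transaction.Rollback();
            throw;
        }
    }
}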

(Important Note: I am NOT saying ECO is better than NHibernate; in fact I had to choose between the two for a project, and IMO NHibernate was the better choice, because its mapping capabilities for legacy databases are far superior, its lower level session/transaction based API gave me more control, and HQL is IMO more powerful than OCL... but not all projects need that level of control, and... especially for new projects that do not need to interoperate with legacy apps, or that will have full control of the database, ECO could be a much better alternative because of its easier to use API with automatic transaction demarcation and non-transactional lazy loading)


What happens when Hibernate opens a new database connection to load a collection, but the owning entity has been deleted meanwhile?


I guess in that case it should throw an exception... but I don't see this as a justification for transaction demarcation... IMHO the same problem can happen with regular "manual session opening and closing"

(Note that this problem does not appear with the two-transaction strategy as described above [in the question Can I use two transactions in one Session?] - the single Session provides repeatable reads for entities.)


Yes, but that also means I need to have a session and transaction open during user think time...

Why even have a service layer when every object can be retrieved by simply navigating to it?

That is really a problematic question... in fact... that question is exactly my point... why should I bother having a service layer if every object can be retrieved by simply navigating to it? IMHO not having to build a service layer sounds like a good thing... doesn't it?


All of this leads to no solution, because Hibernate is a service for online transaction processing (and certain kinds of batch operations) and not a "streaming objects from some persistent data store in undefined units of work"-service. Also, in addition to the n+1 selects problem, do we really need an n+1 transaction and connection problem?

So... that means... that if we want distributed non-transactional lazy loading we should build a "streaming objects from some persistent data store in undefined units of work" service? Is that what other ORMs (like Borland ECO or Apple's EOF) are? Who draws the line? I know that the owners of Hibernate are free to choose, but is this decision theoretically supported? I'd like to know how...


The solution for this issue is of course proper unit of work demarcation and design, supported by possibly an interception technique as shown in the pattern here, and/or the correct fetch technique so that all required information for a particular unit of work can be retrieved with minimum impact, best performance, and scalability.

Yes, but OpenSessionInView is only a good idea if you are building a web application... and if you want only one transaction per request/response cycle... if your UI deviates from that... then it is no longer such a good idea...


Note: Some people still believe that this pattern creates a dependency between the presentation layer and Hibernate. It does not...


Well, then, why isn't OpenSessionInView a good idea for interactive UIs in smart clients? Isn't changing from ASP.NET to WindowsForms a change in the presentation layer? So... if I build my application to share its business logic between a WindowsForms and an ASP.NET presentation and I am going to have problems... then that means I am tied to a particular presentation layer... doesn't it?


Object Relational Mapper vs Business Entity Manager

In my opinion the best two are Hibernate and EOF; I think a mix between them could create "the ultimate" Object Relational Mapper... or the ultimate Business Entity Manager...?

EOF (the best BEM?)

Hibernate (the best ORM?)

An interesting topic for analysis could be the limits between an Object Relational Mapper and a Business Entity Manager. Some people think that Hibernate shouldn't evolve to become more like EOF, because it should aim to do one and only one thing (be the best ORM), and other people think that it would be nice to have the services that a Business Entity Manager gives you:
  • automatic connection handling
  • automatic transactions handling
  • multiple undo levels
  • real nested transactions
  • client and server objects
  • distributed non-transactional lazy loading
  • etc,etc

There are also some people who think that a Business Entity Manager is closer to the MDA "objective", so I think it is important to take a look at (Using Borland's ECO to develop model-powered applications for .NET) and many related articles; the especially interesting thing for me about ECO is its OCL support (which can be used, for example, to configure constraints that help ensure that only consistent data is saved into the database... it is kind of a Design by Contract for the domain model)

IMO the Active Record pattern is getting too much popularity (perhaps thanks to Ruby, but I think it leads to an anemic domain model) and a Domain Model based on Business Entity Managers is not as popular (thanks to the problems in EJB1 and EJB2), but it is important to remember that EJB is not the only way to build a Business Entity Manager; there are ways to build Business Entity Managers that are a lot more lightweight and user friendly than EJB (just investigate more about EOF and ECO)

Monday, May 08, 2006

NG-UIPAB V3 Alpha 3... UnitTesting!

Hi!
This release has several minor bug fixes, the majority of them because I have upgraded the UnitTests for UIPAB 2.0, and that allowed me to test all the changes the UIP has suffered in my hands. This will make it easier to add new features without the fear of finding I broke something important.

The majority of the bugs fixed were related to the difference between the HybridDictionary and the new generic Dictionary<K,V>: with a HybridDictionary it was all right to ask for an unknown key:

if (hybridDictionaryInstance["SomeUnknownKey"] == null)
{
    // The SomeUnknownKey is not in the HybridDictionary
}

But in .NET 2.0 that code throws a KeyNotFoundException, so I have to write:

if (!genericDictionaryInstance.ContainsKey("SomeUnknownKey"))
{
    // The SomeUnknownKey is not in the generic Dictionary
}
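
As a side note, Dictionary<K,V> also offers TryGetValue, which combines the existence check and the lookup in a single call (just a small sketch; the string value type here is only an example):

string value;
if (!genericDictionaryInstance.TryGetValue("SomeUnknownKey", out value))
{
    // The SomeUnknownKey is not in the generic Dictionary
}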

The other problem was WindowsFormViewManager.GetCurrentViews(taskId): I am not sure that my choice for the return type (IDictionary<string, IList<WindowsFormView>>) was the correct one, but it seems to be consistent with my interpretation of the internal workings of the UIPAB.

So, in the test, where it said:

Hashtable views = WindowsFormViewManager.GetCurrentViews(taskId);

Now says:

IDictionary<string, IList<WindowsFormView>> views;
views = WindowsFormViewManager.GetCurrentViews(taskId);

And, where it said:

views["name"];

it now says:

view["name"][0]

But I am not that convinced about the usability of this structure... perhaps I should simplify it in the next release. (The previous code IMO seems to handle this in an inconsistent way... some code assumes that the view collection is a dictionary of strings and WindowsFormViews, and other code assumes that the key is a string but the value is an IList... I guess I need to analyze this more thoughtfully.)



Sunday, May 07, 2006

UI Design: The principle of grammar

While looking for information to "back up my personal research" on UI design (and its relation to the underlying object graph and persistence mechanisms) I found this page, "A Summary of User Interface Design Principles", and I got particularly interested in the 8th principle, the principle of grammar: "A user interface is a kind of language -- know what the rules are."

That principle says that the two most common grammars are known as "Action -> Object" and "Object -> Action". In "Action -> Object", the operation (or tool) is selected first (similar to Edit then List?) and in "Object -> Action" the object is selected first and persists from one operation to the next (similar to List then Edit?).

On the other hand, these two "most common grammars" are IMHO not exactly "data manipulation oriented (DMO)", because the principle states that in "Action -> Object": "When a subsequent object is chosen, the tool immediately operates upon the object. The selection of the tool persists from one operation to the next, so that many objects can be operated on one by one without having to re-select the tool." And in a typical DMO application you do not choose the "edit" tool and then apply it to a list of different objects... so this grammar seems to apply more to a typical "tools for editing documents" kind of UI. But "Object -> Action" matches better with List (you have to select an object) then Edit (the action)... I will have to look more into this...

Another principle that is related to my doubts in UI design (and it is especially related to UI Design: When do you save?) is the 10th: "The principle of safety: Let the user develop confidence by providing a safety net". According to this principle, the UI should provide a safety net for new users, but provide the option to "work in unsafe mode" for advanced users. I think the most important features a program should provide are a "safe cancel button that really cancels" and an "undo mechanism" (because, as much as we would like the user to pay attention to our message box, the truth is that both novice and advanced users dismiss them without too much care, and after that, only "Undo" can save them).

"The principle of context -- Limit user activity to one well-defined context unless there's a good reason not to" (the 11th) is IMO more related to the modality of a UI than the 8th... perhaps because in general I tend to use modal windows when I want the user focused on a particular context, and I don't want him to "click around" and do possibly conflicting actions (like trying to delete the record he is currently editing).

I also liked the 14th principle, "The principle of humility -- Listen to what ordinary people have to say". It reminded me that I have to pay attention to the user ("A single designer's intuition about what is good and bad in an application is insufficient. Program creators are a small, and not terribly representative, subset of the general computing population."), but that the user is not the owner of all the answers: "One must be true to one's vision. A product built entirely from customer feedback is doomed to mediocrity, because what users want most are the features that they cannot anticipate."

The only thing I didn't like about these principles was the examples: they were all about the typical "document editing UIs" like word processors, painting programs, etc., and not about data manipulation applications (applications built on top of CRUD and transaction principles). Mmmmm, perhaps I should write my own examples and post them here... If you have an example, please share it with me...

Friday, May 05, 2006

UI Design: When do you save?

Hi!

How do developers save?? I know, I know, after reading some of the comments on UI Design: Edit then List vs List then Edit I learned that most developers believe that UI design is a user concern, not a developer concern (build the UI as the user wants it, not as the developer wants it)... but I have been wondering...

  • Do the users really know what they want?
  • Do the users really appreciate some of the internal behaviors the UI provides?
  • Do all software projects have enough budget and time to make real usability tests?
  • Do all customers pay an amount of money for a project that justifies the extra effort of building a well designed UI?
  • Do you as a developer have a framework that makes all these issues moot, so that for you it is so extremely easy to build a good UI that you always build it in the absolute best way?

Let's take, for example, the "Save" or "Cancel" button in a typical CRUD application, and let's say the UI is built following List then Edit:

  • The user searches for all customers named "John"
  • The user selects the first one from the list
  • The user clicks the edit button on the tool-bar (in the List window)
  • The Edit window appears, the user modifies everything (from simple fields to complex to-one, to-many and many-to-many relationships) and then...
    • The user clicks "save" because he is happy with his changes
    • The user clicks "cancel" because he is not happy with his changes.

Now... a well behaved system shouldn't write anything into the database until the "save" button is clicked (regardless of the complexity of the "edit customer" UI), and a well designed persistence API should make it as easy as "editingContext.Rollback()" to achieve that effect... and, from a performance perspective, if as a result of editing the customer we created new "detail" objects (or rows) and we decided to cancel the operation, the server shouldn't even be aware that we created those objects and then "rolled back" their existence.
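
To be clear about what I mean by "editingContext", here is a hypothetical sketch (none of these types exist in a real library, it is just the shape of the API I would like to have):

public interface IEditingContext
{
    // New and modified objects are only tracked in memory...
    T Create<T>() where T : new();

    // ..."save": flush all pending changes to the database in one transaction
    void Commit();

    // ..."cancel": discard every in-memory change, nothing reaches the database
    void Rollback();
}

// From the Edit window:
//   if (userClickedSave) editingContext.Commit();
//   else                 editingContext.Rollback();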

But the questions now is:

  • Do we all always check that nothing is written to the database until we click "save"? And does the user care?

I know I always try to check that I save if and only if the save button is clicked... but I have met lots of other developers who do not care... if, for example, they have to add new addresses to the customer, they write the new addresses into the database the moment they create them (from inside the edit customer UI!) and after the "cancel" button is clicked, this to-many relationship is not rolled back... they just don't care...

Now... I have to admit that I have done this sometimes (when in an extreme hurry to deliver a new form), and "I feel" like it is enough if I add a message box warning (of those the user never reads) saying "if you add a new address all your changes will be committed to the database"... the user, as always, just dismisses the warning and proceeds to add addresses... and perhaps clicks cancel later... nevertheless, as soon as I have time, I fix the form so that it only saves when the save button is clicked... (and sometimes, with complex nested UIs, that is a pretty hard job)

But... the thing is that I have seen lots of systems built with the idea "if you write it on the screen, and you do as much as click any button, your changes will be saved". I have seen it especially in web based systems, but also in smartclients... and I have had the opportunity to hear all kinds of complaints about these systems, and I have even replaced some of them with "only if you click save you save" systems... but I have never heard a single user saying "it is nice that the cancel button really works now" or "I hated the previous system because I clicked cancel and it saved my changes anyway"... yet... strangely, I have heard some of them complaining "I added 50 addresses to a customer, and then by mistake I clicked cancel, and then by mistake again I answered "yes" to the question "Are you sure you want to cancel your changes?", and my changes were lost!!!"

But that has been my experience... maybe you do know a user that appreciates a consistent behavior in the "save" and "cancel" buttons...

How do you decide the transactional UI boundaries of your system? Do you really ask the user "do you want the cancel button to really work"? Do you always have the time to build 2 or 3 different UI prototypes and really test usability? Do your users really appreciate the extra 2 or 3 days it may take you to build a transactionally consistent UI? Or do they prefer a UI that is built faster, so that if they make any mistakes they "edit again" and roll back the mistakes manually (and they are happy with that behavior)? Have you ever received a complaint from a user saying "I want a cancel button that really works"? Do you implement cancel by calling an API similar to "editingContext.Rollback()", or do you cancel in "a custom way" (by explicitly deleting changes already made to the database)? Do you offer real "save" and "cancel" buttons even in your web based applications?

Monday, May 01, 2006

Security Contexts in WindowsForms and In ASP.NET

In WindowsForms, if you want to set the current user, all you have to write is:

Thread.CurrentPrincipal = new CustomPrincipal(new CustomIdentity("login", "password"));

But in ASP.NET you cannot ask the current thread for the user, because the same thread serves requests from different users... so... where should you store this information? It seems that the place is the HttpContext.User property, as described on devx.
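
Something like this, I guess (a minimal sketch in Global.asax, using the built-in GenericPrincipal/GenericIdentity in place of the CustomPrincipal/CustomIdentity above):

using System;
using System.Security.Principal;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_AuthenticateRequest(object sender, EventArgs e)
    {
        // The principal is stored per request, not per thread
        HttpContext.Current.User =
            new GenericPrincipal(new GenericIdentity("login"), new string[] { "Users" });
    }
}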

I think I should look more into this...

NGUIP ObjectBuilder Integration

I have added support for the ObjectBuilder in 2 places:

  • Inside the GenericFactory (so that everything that was created by it can now enjoy the benefits of creation by the ObjectBuilder)
  • Inside the "FactoriesFactory" so, now the factories that create the controllers, the viewmanager, the state, etc, are singleton (thanks to SingletonPolicy)

The FactoriesFactory acts as a "central" registry for other factories (or perhaps services, in my next release) and it is a property of the Navigators base class (that way it is available everywhere in the internal API), but currently the contents of this registry are fixed, and I am not sure where I should add code to customize the initialization...

You can "get" a factory with the following code:

return FactoriesFactory.Get<IViewManagerFactory>();

Notice that we ask for an interface (the Factories are registered in the object builder using an ITypeMappingPolicy to achieve that effect).

I was thinking about adding an event or virtual method in the UIPManager to allow for customization of the factories list... but I forgot that in web mode the Navigators are created in the WebFormView... and I would really like to have a "unified" place to do this... any suggestions?

User Interface Process Application Block V3 Alpha2 (Unofficial)

I am almost ready to release Alpha 2 of the unofficial User Interface Process Application Block, I made several changes, here is a list with some of them:

  • No more non-generic ArrayLists (they have all been converted to Lists with generics)
  • No more HybridDictionary (they have all been converted to Dictionaries with generics)
  • The UIPManager is now a singleton (first step toward a plug-in architecture)
  • The StateCache is now a singleton and implements a new interface, IStateCache (first step toward a plug-in architecture)
  • The factories are also coordinated by a singleton FactoriesFactory that now implements the interface IFactoriesFactory
  • Instead of calling the FactoriesFactory singleton everywhere, it is now a parameter of the Navigator base class (first step toward a plug-in architecture)
  • Factories now receive as a parameter in their constructor the IGenericFactory they will use internally to create objects.

I am not sure if I will release it as it is right now... or wait until I add support for the ObjectBuilder. I am also deciding whether I should call the ObjectBuilder from inside the GenericFactory, or whether I should create a "ServiceCollection" to manage all the factories, similar to the one available in the CompositeUI

Saturday, April 29, 2006

UI Design: Edit then List vs List then Edit

Today I was thinking about the way I build UIs (List then Edit) and the way most people I know build UIs (Edit then List), and the relation that the way you build your UI has with the way you store the data you manipulate in your UI.

List then Edit

In this "pattern" for UI structure, after the user selects an option in the menu he sees a window that allows searching with some controls (generally in the top of the window),and with some control in the bottom that represents the search results, then the user selects one of the search results (an object, or row if you prefer) and he proceeds to work (edit, modify) it, for that he opens a new window "the editor" (perhaps by double clicking the selected object, perhaps by clicking an "edit" button in the list window) in the editor he may modify the object being edited as much as he likes, and then click "save" if he wants to commit his changes or "cancel" if he wants to rollback them.
If the user wants to create a new object, the list has a "new" button that also calls the "editor" so that the user can set the initial values for the fields before the object is committed to the database for the first time

Edit then List

In this other "pattern" for UI structure, after we select an option in the menu we are right there in the editor, but all the controls that could allow us to modify the current object (or row) are disabled, because we haven't searched for any (or created a new one) from here the user can choose to create a new object that action has the effect of enabling all the control in the editor, or search for an already persisted object (or row) by clicking the "search button", that shows the search "list" window, in the top of that window there are controls to configure the search criteria, and in the top a control to represent the search results, from here the user can choose one of the search results (an object or row), and click on the "accept" button, which takes him back to the editor that now is displaying the previously selected object, here the use may choose to make some changes and click "save", or just cancel click "cancel" but actions have the effect of disabling the controls in the editor UI until the buttons "search" or "new" are used again.


Edit in List
When an application generally follows List then Edit, sometimes, if the info in the selected object is very simple, the user may modify it right there in the list; but for objects of medium to high complexity (several fields, to-one or to-many relations with other objects), calling a window to modify the object is more comfortable for the user.

Search (List) in Editor
Sometimes the developer likes to use the same UI to Edit and to Search; this works like some kind of "query by example": when the user enters the editor, instead of the controls being disabled and waiting for a click on the "new" or "search" button, they are enabled, and if the user writes some data into them and clicks "search", the already persisted object (or row) that is most similar to the partial information already written in the editor is loaded. Sometimes there are many objects that match the query by example; that is when a navigator control can be a nice thing to have.

Master List, Detail Editor
This is also a very common configuration: both the list and the editor are in the same window, the list generally at the top and the editor at the bottom. I think this somewhat corresponds to Two Panel Selector.

Persistence
Okey, until now I have exposed what I think are some of the common "patterns" used to create UIs that manipulate data; now... what I find most interesting about this is the way these "patterns" affect the way the data is stored in or retrieved from persistence:

  • If we use Edit then List:
    • The Edit is displayed on the screen
    • The user clicks "search" and the search List window appears
    • The user configures the search criteria in the List window
    • The objects (or row) matching the criteria are shown in the List window
    • The user selects one of the matching objects and clicks the "accept" button in the List window
    • We return to the Edit window, that now is displaying the selected object.
    • We can commit changes to the current object by clicking "save", or click "cancel" to dismiss the changes; if we do either action the Edit controls are disabled and we again have to click "search" or "new" to work with an object
    • The problem here is that the user might want to see the same search results he used the last time, but each time we call the search List window we are creating a new one, and all of the configuration from the last time we called it is already lost (this disadvantage has a "nice side", because we don't have to worry about showing "stale data" to our user)
    • Another disadvantage is that maybe not all the controls we use to manipulate the data in the Edit window are "databindable", so we have to manually reset the state of the Edit window, and if we don't do it correctly we could have bugs that "transfer information between edits". (First I edit John and set his age to 24, then I edit Mary, and her age is also shown as 24, but her age is 56 and we had no intention of changing it; it happened because we forgot to clear the text-box value.)
  • If we use List and then Edit:
    • The list is displayed on the screen
    • The user configures the search criteria in the List window
    • The objects (or row) matching the criteria are shown in the List window
    • The user selects one of the matching objects and clicks the "edit" button in the List window.
    • The "Edit" window appears on top of the List window, and is displaying the selected object.
    • We can commit changes to the current object by clicking "save", or click "cancel" to dismiss the changes; if we do either action the Edit window is closed and we return to the List window
    • One of the advantages of this approach is that we don't have to worry about the "transfer between edits" bug, because each time we call the Edit window it is a new window, without any data, ready to configure itself to match the data contained in the object we are going to edit.
    • The problem of keeping the search results so they can be reused is also automatically solved, because the search List window was never closed; it was there all the time, behind the edit window.
    • But the problem now is that some of the data shown there may not be up to date, or perhaps it shows data that has never been in the database, and that we do not want in the database. How can that be? Well, we edited the object in the edit window, and the databinding ensured that the changes were written to the object before we clicked "save" or "cancel". If our intention was to keep those changes and we clicked "save" in the Edit window, then we don't have a problem... but if we clicked "cancel", now we need to roll back in-memory changes, and some of those changes could have modified relationships with other objects, or properties of those other objects; we could have created, deleted or updated a complex graph of objects, but "only in memory", and now we need to roll back all these changes. If we are using an Object Relational Mapper (ORM) that supports this (like Apple's EOF), or a relational cache that allows for in-memory transactions (like the .NET DataSet), we can solve this problem easily (of course the DataSet has other disadvantages), but if we are using an ORM like NHibernate that AFAIK does not roll back in-memory changes, then you have a problem: you have to re-fetch the information from the database (see the sketch after this list).
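
In case it helps to picture it, this is roughly what "cancel" tends to become in that situation (just a sketch, assuming NHibernate's ISession and a hypothetical Customer currently bound to the Edit window):

using NHibernate;

public void CancelEdit(ISession session, Customer customer)
{
    // Overwrites the in-memory state of "customer" with the current database
    // state; changes to related objects and new, never-persisted objects
    // hanging off the graph are not cleaned up automatically
    session.Refresh(customer);
}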

This seems like a big omission from the NHibernate guys... but it isn't exactly so: everything I have exposed here has been on the assumption that we are working in a "Smart-Client" that holds local information, and Hibernate was born in "the web world". In the web, you "need" to re-fetch the information on each request (so when you go from editor to list or from list to editor you always reload the data), and you don't really pass the object you are going to edit from the list to the edit; it is easier and more efficient to just pass the primary key. Of course, the problem there is that you can only do that with objects previously persisted in the database; if your object is new it has to be serializable, and sometimes that just isn't advisable (but that is an issue for another discussion)...

The thing with web based applications is that the controls in the UI don't actually store the information directly in your object: they store it in the view-state, or in the query-string, in cookies, in the session, or in an object that "represents the form", and only when you finally want to save do you extract the information from the web-controls and write it into your object (at least that is more or less the way I did it in the Java servlet world). But this is a "feature" that might be going away... the problem is that with the new JSF databinding, or with the new databinding facilities of ASP.NET 2.0, you can actually bind your controls directly to your datasources... and then how will we roll back the in-memory changes? Should we build a framework on top of NHibernate, a kind of "in-memory object context" that handles the commits and rollbacks in memory?

More Questions

  • And what about the case when a list window calls an edit window that has an embedded list control that calls another edit window (complex multiple level or nested interfaces)? Should the object relational mapper make it easier for me to build this kind of UI?
  • Or Should a new kind of framework deal with the problem of communicating the persistent objects with the UI?
  • Is using DTOs really the solution? Does Apple's EOF go beyond the responsibility of an ORM by providing in-memory transactions?
  • If NHibernate can persist objects almost transparently to the UI, shouldn't this other framework be able to do the same and transparently present the objects in the UI, without having to manually create objects to do this job?

I am thinking about publishing this post as an article on Wikipedia to see how it evolves... I would love to read some comments about this article to improve it.





Monday, April 24, 2006

User Interface Process Application Block V3 (Unofficial)

I have created "my proposal for V3 for the UIP" (Microsoft seems to want that we all change to the Composite UI framework... while I agree it is amazing and superior in many ways, I miss some features of the UIP, like the Navigator or ASP.NET support, my final goal would be to make UIPv3 a kind of "module" for the CompositeUI AppBlock that make it easier to migrate UIPv2 to .NET 2.0 and brings the advantages of the ObjetBuilder IoC to ASP.NET)

Okey, okey, what did I upload? Unofficial UIPv3 Alpha1 includes the bugfixes from hswami, my latest version of the UIP.Attributes, separation of the UIP internal logic into layers (Common, Factories, UIProcess (Core) & Attributes), replacement of the weakly typed Hashtable and HybridDictionary with the strongly typed generic Dictionary, and other minor bugfixes here and there.

This is just the first step; IMHO the main problem with the UIP internal architecture was that it was not layered, so changing the way objects were created in a centralized way was hard... for the next release (Alpha2, probably next weekend) I will integrate the new Factories layer with the ObjectBuilder and replace the internal ArrayLists with strongly typed generics...

Friday, April 14, 2006

Dynamically Loading Assembly for MSBuild Task

For a task that has to analyze an assembly using reflection (in this case a task to export the hbm.xml or the SQL from an assembly with Plain Old .NET Objects using NHibernate.Attributes), one has to be able to load and unload the assembly dynamically (without that, it is impossible to "build" the solution twice, because the first build will "lock" the assembly with the annotated Plain Old .NET Objects).
So, first I tried loading the assembly using the reflection-only context, but it turned out that the code inside NHibernate.Mapping.Attributes.HbmSerializer uses the constructors of the attributes, which cannot be called in the reflection-only context (to examine custom attributes loaded in the reflection-only context, one has to use the CustomAttributeData class).
Well... I wasn't going to recode the HbmSerializer to make it work with the new .NET 2.0 API (not for now), so I have only one option left... create a new AppDomain and run the analysis there... but doing that is proving to be harder than I originally thought.
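
The rough shape of what I am trying to do is this (just a sketch, not the actual task code): the inspection runs in a throw-away AppDomain, so the annotated assembly is released when the domain is unloaded and the next build is not blocked.

using System;
using System.Reflection;

public class AssemblyInspector : MarshalByRefObject
{
    public void Export(string assemblyPath)
    {
        Assembly assembly = Assembly.LoadFrom(assemblyPath);
        // ...run HbmSerializer / SchemaExport against "assembly" here...
    }
}

public static class SchemaExportRunner
{
    public static void InspectInSeparateDomain(string assemblyPath)
    {
        AppDomain domain = AppDomain.CreateDomain("HbmExportDomain");
        try
        {
            AssemblyInspector inspector = (AssemblyInspector) domain.CreateInstanceAndUnwrap(
                typeof(AssemblyInspector).Assembly.FullName,
                typeof(AssemblyInspector).FullName);
            inspector.Export(assemblyPath);
        }
        finally
        {
            // Unloading the domain is what releases the assembly
            AppDomain.Unload(domain);
        }
    }
}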

Thursday, April 13, 2006

Visual Studio C# Express: Can Not Update with AttachDBFileName

Hi! Today I spent all morning teaching my father how to do CRUD with C# Express, but despite my best efforts, each time I ran my project the database was completely empty... it turned out to be a "feature" of databases that are part of the solution (Visual Studio copies the database from the source folder into the \bin\Debug\ folder each time the application compiles); the solution, as Humble Weeble posted, is to:

  • Select the database in Solution Explorer
  • Locate the 'Copy to Output' property in the Properties window
  • Change it from 'Copy always' to 'Do not copy'


Of course, after that, the problem will be that the connection string in your app.config is no longer pointing to the "output" database, so AFAIK you will have to use an absolute path (as I did).
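
To be concrete, the kind of connection string I mean looks like this (just a sketch, the path and database name are made up):

using System.Data.SqlClient;

public class DatabaseConnectionExample
{
    // Points at the source .mdf directly with an absolute path, instead of
    // the copy that Visual Studio used to drop into \bin\Debug\.
    public static SqlConnection Open()
    {
        SqlConnection connection = new SqlConnection(
            @"Data Source=.\SQLEXPRESS;" +
            @"AttachDbFilename=C:\Projects\CrudDemo\Data\Shop.mdf;" +
            "Integrated Security=True;User Instance=True");
        connection.Open();
        return connection;
    }
}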

Tuesday, April 11, 2006

NHibernate Attributes + MSBuild = SQL DDL Generation

All right... I am building a system that uses NHibernate as the persistence mechanism to save into the database, and I decided that the best way to go... was to draw the object model using the new Visual Studio 2005 Class Designer, then add the attributes... and then generate the SQL... but the problem is that I have to run an external tool to do that... and I want the DDL SQL to be re-generated each time I compile... so I guess I will have to create a new MSBuild Task that integrates SchemaExport with MSBuild

So... I have written my task... but it does not work... now I am reading How to: Write a Task to find out what I am missing... perhaps it is because my task depends on an external assembly (NHibernate)... mmm no... that was not the problem... the problem was that I was not specifying the correct path for the source assembly.

It works.... well, it worked... but only once... it seems that the source assembly gets locked... let's think... how could that be fixed...
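
For reference, this is roughly the shape of the task I am describing (a sketch, not my actual code: the AssemblyPath/OutputFile properties are names I just made up, and the exact HbmSerializer/SchemaExport signatures can differ between NHibernate versions):

using System.IO;
using System.Reflection;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;
using NHibernate.Cfg;
using NHibernate.Mapping.Attributes;
using NHibernate.Tool.hbm2ddl;

public class ExportSchemaTask : Task
{
    private string assemblyPath;
    private string outputFile;

    [Required]
    public string AssemblyPath
    {
        get { return assemblyPath; }
        set { assemblyPath = value; }
    }

    [Required]
    public string OutputFile
    {
        get { return outputFile; }
        set { outputFile = value; }
    }

    public override bool Execute()
    {
        // Load the annotated assembly (this is also what keeps the file
        // locked until the loading AppDomain goes away).
        Assembly domainAssembly = Assembly.LoadFrom(assemblyPath);

        // Turn the NHibernate.Mapping.Attributes into an hbm.xml stream.
        MemoryStream hbmStream = new MemoryStream();
        HbmSerializer.Default.Serialize(hbmStream, domainAssembly);
        hbmStream.Position = 0;

        // Feed the mapping to NHibernate and write the DDL to a file,
        // without touching the database (dialect/connection settings are
        // assumed to come from an NHibernate config file).
        Configuration configuration = new Configuration();
        configuration.Configure();
        configuration.AddInputStream(hbmStream);
        SchemaExport schemaExport = new SchemaExport(configuration);
        schemaExport.SetOutputFile(outputFile);
        schemaExport.Execute(false, false, false, true);

        Log.LogMessage("Schema exported to {0}", outputFile);
        return true;
    }
}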

Sunday, April 09, 2006

Domain-Specific Language Tools... Class Designer Power Toys... where to start?

I am not sure where to start... should I create new functionality for the Class Designer (perhaps extend it for object relational mapping)?... or should I create my own Domain-Specific Language?

Saturday, April 08, 2006

Guidance Automation

I have been reading about Guidance Automation... it seems to be an easy way to extend Visual Studio functionality to a degree never seen before...

Wednesday, April 05, 2006

Why in ADO.NET the DataSet has to work with DataRow and not an Interface

Right now... there are two ways of handling an XML file easily and in an object-oriented way:

  1. The intrusive (but more dynamic) TypedDataSet, generated by xsd.exe
  2. The object serialization classes generated by XSDObjectGenerator.

Each one has its advantages and disadvantages... but I wonder... why can't we combine the advantages of both by changing the internal design of ADO.NET? How?

Well, the DataSet... is a collection of DataTables... the DataTable is a collection of DataRows.... why is it not built to be more flexible? i.e.:

  • The DataSet could be an implementation of IDataSet
  • The default implementation of IDataSet, "the DataSet", could be a collection of IDataTables
  • The default implementation of IDataTable, "the DataTable", could be a collection of IDataRows
  • The default implementation of IDataRow... could be "the DataRow"

This way... I wouldn't have to worry about having to inherit from the DataSet, DataTable or DataRow classes... if that is not what I want... and this way I could create my own independent object hierarchy and... if I want to handle it as a DataSet, well, all I have to do is implement the appropriate interfaces...
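
Something like this is what I have in mind (purely hypothetical interfaces, nothing like them exists in ADO.NET; the member lists are reduced to the minimum needed to show the idea):

using System.Collections.Generic;

// A row is just something indexable by column name.
public interface IDataRow
{
    object this[string columnName] { get; set; }
}

// A table is a named collection of rows.
public interface IDataTable
{
    string TableName { get; }
    IList<IDataRow> Rows { get; }
}

// A dataset is a collection of tables.
public interface IDataSet
{
    IList<IDataTable> Tables { get; }
}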

Of course... making Microsoft change the ADO.NET API to add this flexibility... could be too much to ask (after all, there are a lot of applications built on top of the "legacy" ADO.NET) but... what about Mono? They could create an extensible, non-intrusive, interface-based DataSet... don't you think?

Sunday, April 02, 2006

Implementing a DataProvider-Independent ADO.NET DataAdapter

Okay... some time ago (in ADO.NET 1.x times) I wondered why data access for Odbc, for OleDb, for SQL Server or for Oracle had to be that different...
So I read Implementing a .NET Framework Data Provider and I read ADO.NET: Building a Custom Data Provider for Use with the .NET Data Access Framework, and I understood the process that has to be followed to build a Data Provider... but I found something strange...

First... since ADO.NET 1.x it was possible to create a connection and encapsulate that creation inside a factory that read the class type from a config file (you had to do it yourself, but it was possible)... after that, using only interfaces and abstract classes common to all the Data Providers... it was easy to create a Command with IDbConnection.CreateCommand.

But then... the problem arose... how to create the correct DataAdapter? (an IDbConnection couldn't create an IDbDataAdapter... an IDbCommand couldn't create an IDbDataAdapter... an IDataReader couldn't create an IDbDataAdapter...)
So... what is the difference between the DataAdapters and everything else? Why were they excluded from the "factory chain" (IDbConnection to IDbCommand to IDataReader)?

With ADO.NET 2.0 that problem is solved, thanks to the new DbProviderFactory.CreateDataAdapter... but back then, when only ADO.NET 1.x existed, I began to think that maybe there was something really different between the IDbDataAdapter/DbDataAdapter and everything else... and I started looking for the difference... and the results were startling... there was absolutely NO difference... take a look at this example code from Microsoft... and now tell me why this "TemplateDataAdapter" couldn't have been included in the .NET Framework... perhaps as a "DataProviderIndependantDataAdapter"... or take a look at, for example, the SqlDataAdapter, or the OleDbDataAdapter, or the OdbcDataAdapter, and tell me... exactly... what do they do that is specific to SQL Server... or to OleDb... or to Odbc?

If you copy & paste the example code from Microsoft into one of your projects... and then, in every place where you use the SqlDataAdapter, you change it to the TemplateDataAdapter... what functionality do you miss? (if any) and... in the remote case that you do find something missing (I couldn't find anything)... is that functionality really database-provider dependent?

Okay, okay, you want to know what my point is... well, my point is:
  1. if the job of a DataAdapter is to represent a set of data commands and a database connection that are used to fill the DataSet and update a database (and I didn't invent that, Microsoft wrote it in the documentation for the SqlDataAdapter)
  2. and the DataSet is data-provider agnostic...
  3. and it is possible to update the database using only interfaces like IDbConnection, IDbCommand, IDataReader... (and the base class DbDataAdapter already implements most of the logic needed to do that) then...


Why in .NET don't we have a DataProviderIndependantDataAdapter? (as I said before, it is so easy to build one, you just have to do a search and replace on the code of the TemplateDataAdapter in the example code from Microsoft, but that doesn't answer the question).
Well, you might think... maybe it is just not worth the effort... maybe it is just a much better idea to have different DataAdapter classes... one for each DataProvider... wrong... that is simply not true...

Even with the new and improved DbProviderFactory.CreateDataAdapter there is a problem, and the problem is the DataAdapter events:

adapter.RowUpdating += new SqlRowUpdatingEventHandler( OnRowUpdating );
adapter.RowUpdated += new SqlRowUpdatedEventHandler( OnRowUpdated );

Have you already seen the problem?

Even if you use the great new DbProviderFactory.CreateDataAdapter, if you need to trigger provider-independent logic using the adapter.RowUpdating or the adapter.RowUpdated events... what do you do? There is no easy answer... (or maybe there is... to create a DataProviderIndependantDataAdapter)

First... I don't see why we have to use an "SqlRowUpdatedEventHandler"... why does ADO.NET not include a "DataProviderIndependantRowUpdatedEventHandler"? AFAIK the SqlRowUpdatedEventHandler has no special difference from the OleDbRowUpdatedEventHandler... or do you see any? (of course, there is one, you might say: the SqlRowUpdatedEventHandler uses SqlRowUpdatedEventArgs and the OleDbRowUpdatedEventHandler uses OleDbRowUpdatedEventArgs) so, the question now is... why don't we have a DataProviderIndependantUpdatedEventArgs? Is there an important difference between the SqlRowUpdatedEventArgs and the OleDbRowUpdatedEventArgs?... well... there is none... they both inherit from RowUpdatedEventArgs... and add nothing... but they do create a problem... if you have code like this:


DbProviderFactory dataFactory =
    DbProviderFactories.GetFactory("System.Data.SqlClient");
DbDataAdapter adapter = dataFactory.CreateDataAdapter();
DbCommandBuilder builder = dataFactory.CreateCommandBuilder();
builder.DataAdapter = adapter;

// Create and fill DataSet (select only first 5 rows)
DataSet dataSet = new DataSet();
adapter.Fill(dataSet, 0, 5, "Table");

// Modify DataSet
DataTable table = dataSet.Tables["Table"];
table.Rows[0][1] = "new product";

// now... how do I add handlers?

if (adapter is SqlDataAdapter)
{
    ((SqlDataAdapter)adapter).RowUpdating += new SqlRowUpdatingEventHandler( OnRowUpdating );
    ((SqlDataAdapter)adapter).RowUpdated += new SqlRowUpdatedEventHandler( OnRowUpdated );
}

// now... that WAS NOT DataProvider-independent code...



Maybe we are missing something... let's take a look at the guide "Writing Provider Independent Code in ADO.NET"... mmm... no, the examples in Retrieving Data with a DbDataAdapter just do not cover the case when one needs to use the RowUpdating or RowUpdated events

Of course, that could have been easily fixed... if we had a
DataProviderIndependantAdapter... with its DataProviderIndependantUpdatedEventHandler and its DataProviderIndependantUpdatedEventArgs... so... why does ADO.NET 2.0 NOT provide one? Don't ask me... but if you want to write code that is REALLY DataProvider independent, then you just have to create your own DataProviderIndependantAdapter.
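
For what it is worth, here is a minimal sketch of what I mean (a hypothetical class: it assumes the protected virtual OnRowUpdating/OnRowUpdated hooks that DbDataAdapter exposes in ADO.NET 2.0, and it just republishes the notifications through the provider-neutral base EventArgs classes):

using System;
using System.Data.Common;

public class DataProviderIndependantDataAdapter : DbDataAdapter
{
    // Provider-neutral events: the args types are the common base classes
    // that SqlRowUpdatedEventArgs, OleDbRowUpdatedEventArgs, etc. inherit from.
    public event EventHandler<RowUpdatingEventArgs> RowUpdating;
    public event EventHandler<RowUpdatedEventArgs> RowUpdated;

    protected override void OnRowUpdating(RowUpdatingEventArgs value)
    {
        base.OnRowUpdating(value);
        if (RowUpdating != null)
        {
            RowUpdating(this, value);
        }
    }

    protected override void OnRowUpdated(RowUpdatedEventArgs value)
    {
        base.OnRowUpdated(value);
        if (RowUpdated != null)
        {
            RowUpdated(this, value);
        }
    }
}

The handlers attached to these events receive plain RowUpdatingEventArgs/RowUpdatedEventArgs, so the same handler code works no matter which DataProvider created the underlying commands.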

ADO.NET vs ObjectRelationalMapping

So... why don't I like DataSets? Well, first of all, it is not that I think that DataSets are useless, it is just that I feel that Object Relational Mapping (ORM) is a far better solution, because, when properly built:

  1. You don't have to worry about primary keys or foreign keys (or the proprietary identity mechanism of a particular database)
  2. You don't have to worry about creating the master row first and the detail rows later (master-detail relationships are automatically handled by the ORM; see the sketch after this list)
  3. You don't have to worry about saving all the changes in your data in the proper order (CRUD operations are automatically ordered following master-detail relationships)
  4. You can easily add optimistic locking to your code without having to change all your SQL "UPDATE" or "DELETE" code.
  5. Your domain model is more abstracted from the fact that it is being read from a database (a really good ORM should be able to read from a different relational source... or perhaps even a hierarchical source (Object Hierarchical Mapper?)... for example, from an XML file.
  6. You automatically get "cache" benefits, transparently, and sometimes you can even plug in special caching strategies.
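
To make points 1 to 3 concrete, here is a minimal sketch (the Order/OrderLine classes are invented for this post, and it assumes an NHibernate mapping where the order's Lines collection cascades saves):

using System.Collections;
using NHibernate;

// Hypothetical domain classes; their NHibernate mapping is assumed.
public class Order
{
    private IList lines = new ArrayList();
    public virtual IList Lines { get { return lines; } }
    public virtual void AddLine(OrderLine line) { lines.Add(line); }
}

public class OrderLine
{
    private string product;
    private int quantity;

    public OrderLine() { }
    public OrderLine(string product, int quantity)
    {
        this.product = product;
        this.quantity = quantity;
    }

    public virtual string Product { get { return product; } }
    public virtual int Quantity { get { return quantity; } }
}

public class OrderRepository
{
    // One Save call: the ORM generates the keys, inserts the master row
    // before the detail rows, and fills in the foreign keys by itself.
    public static void SaveNewOrder(ISessionFactory sessionFactory)
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction transaction = session.BeginTransaction())
        {
            Order order = new Order();
            order.AddLine(new OrderLine("Widget", 3));
            order.AddLine(new OrderLine("Gadget", 1));

            session.Save(order);
            transaction.Commit();
        }
    }
}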

What Do I Know?

I am a software developer... I am certified (MCP) in WindowsForms.NET (with C#) development and I really like that platform (I especially love version 2.0)... I also know how to code in Java (I love Hibernate, I love Spring, and I somewhat like Velocity... but I think the future is in JavaServer Faces... or perhaps in Tapestry or... to compete with XAML... the best choice is OpenLaszlo), I know a little Delphi... and I have worked with Oracle and SQL Server... but to tell you the truth... the best technology that I have used is WebObjects (sadly, IMHO, Apple just doesn't seem to know the real value of it) and I really don't like to write SQL or stored procedures... I believe the way to go is to use Object Relational Mapping and I really dislike ADO.NET DataSets... I mean... they are good for web-service data exchange... but to build the business rules of a software system... there is nothing like the Domain Model pattern... with something like Apple's EOF... or Hibernate (or NHibernate)... so I am really sad that ObjectSpaces is delayed

TOEFL, Scholarship

Well, okay, my native language is Spanish... but I want to take the TOEFL exam... and maybe get a scholarship to do a Master's Degree in the USA... or Canada... or the UK... so, I need to practice my English... and when my posts are not about philosophy... they will be about software engineering... and in the software development world... English is kind of the "lingua franca"... so I guess almost all of the posts here will be in English

Startup! Starting Up!

Hi! This is my first entry in the blog!
Hello! This is my first entry in the blog!