
knockout/pager - possible solution for DOM teardown

A possible solution to the DOM leakage issue with PagerJS that I described previously (in "KnockoutJS is great, not so sure about PagerJS"):

Add an afterHide event handler to tear down the DOM after you leave a page for another one.  The handler can wipe the DOM created from the page's template, triggering any cleanup registered with ko.utils.domNodeDisposal (see this post on StackOverflow).

Also, clearing the Page object's reference to the view model allows the view model to be garbage-collected.  This is safe for both lazy and non-lazy bindings: with lazy binding, only the Page object holds a reference to the view model, so it becomes collectible; with non-lazy binding, someone else passed in a reference to an existing view model, so that view model simply hangs around as before.

You could attach this callback globally or within individual page bindings.

var afterHideCallback = function afterHideCallback(event) {
    var elementToDestroy = event.page.element;

    // clear out the reference from the node with the page binding so the view model can be garbage-collected
    event.page.ctx = {};

    // tear down and remove the DOM for the page being hidden
    $(elementToDestroy).children().each(function destroyChildElement() {
        ko.removeNode(this);
    });
};

// attach this event globally or in individual "page: {afterHide: ...}" bindings
pager.afterHide.add(afterHideCallback);

Time intervals and other ranges should be half-open

It is a good practice to treat time intervals as half-open inequalities: start <= x < end. Note the asymmetry.  This is how the Joda-Time API implements time intervals, and also how the SQL "overlaps" keyword works.  Also note that SQL "between" does not behave the same way.

The main reason this is important is to allow adjacent time intervals like 10:00-11:00 and 11:00-12:00, such that the instant of 11:00:00 falls in the second interval, but not the first.
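For example, with Joda-Time, whose Interval type is half-open (a small sketch; the dates are arbitrary):

import org.joda.time.DateTime;
import org.joda.time.Interval;

// Joda-Time intervals are half-open: start is inclusive, end is exclusive
Interval first = new Interval(new DateTime(2013, 1, 1, 10, 0), new DateTime(2013, 1, 1, 11, 0));
Interval second = new Interval(new DateTime(2013, 1, 1, 11, 0), new DateTime(2013, 1, 1, 12, 0));
DateTime eleven = new DateTime(2013, 1, 1, 11, 0);

System.out.println(first.contains(eleven));   // false - 11:00 is excluded from 10:00-11:00
System.out.println(second.contains(eleven));  // true  - 11:00 is included in 11:00-12:00
System.out.println(first.abuts(second));      // true  - adjacent, with no gap and no overlap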

It is a mistake to try turning the intervals into 10:00-10:59, etc. An instant like 10:59:01 would fall through the cracks between intervals, and end minus start would be 59 minutes rather than an hour.

With date or timestamp ranges, the same logic applies.  But there's a key difference: while end-users tend to think of time ranges intuitively as half-open, they tend to think of date ranges as closed.  That is, users expect 10:00-11:00 and 11:00-12:00 to be adjacent and non-overlapping even though 11:00 is the end of one range and the start of the other.  For dates, though, users think of ranges like Jan 1-Dec 31 as including the last day of the month or year.

So what to do?  There are two workable solutions.  One is to apply the same half-open inequality as before, adding a day to the user-specified end date, so you would have something like 1/1/2013 <= x < 1/1/2014.  Another is to truncate the input under test: 1/1/2013 <= trunc(x) <= 12/31/2013.  Both give the same results: any instant up to but excluding 1/1/2014 falls within the range.  The first approach is better because you don't have to remember to strip the time portion off of x before testing.
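In Java with Joda-Time, the first approach might look something like this (a sketch; the variable names and the timestamp under test are made up):

import org.joda.time.DateTime;
import org.joda.time.LocalDate;

// user enters a closed date range: 1/1/2013 - 12/31/2013
LocalDate userStart = new LocalDate(2013, 1, 1);
LocalDate userEnd = new LocalDate(2013, 12, 31);

// convert to a half-open range by adding a day to the end date
DateTime rangeStart = userStart.toDateTimeAtStartOfDay();
DateTime rangeEnd = userEnd.plusDays(1).toDateTimeAtStartOfDay();

// no need to truncate the timestamp under test
DateTime x = new DateTime(2013, 12, 31, 23, 59);
boolean inRange = !x.isBefore(rangeStart) && x.isBefore(rangeEnd);  // true: start <= x < end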

A wrong answer is to do something like 1/1/2013@midnight <= x <= 12/31/2013@23:59:59.  This assumes you know the exact precision of x, and there's a risk of a moment of time falling through the cracks again.  It's also just icky.

Integer ranges where the input under test is a decimal behave like dates and times, with the same options.  Currency is a good example: you can use half-open ranges such that $100-200 and $200-300 are adjacent and non-overlapping and $200.00 falls only in the second range.  You could also have the user specify closed ranges like $100-199 and $200-299 and do the same thing with floor(x), or with start <= x < end + 1.  What you can't do is let $199.01 fall between two adjacent intervals or let $200.00 match both ranges.

Incidentally, it seems like even when we're dealing with pure integer ranges, the common practice in Java is to use half-open intervals for things like substrings, and Python does the same for array slicing.  This seems to be the best practice for programming in general, for readability and minimizing chance of mistakes (see http://stackoverflow.com/questions/8441749/representing-intervals-or-ranges).
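For instance, Java's String.substring uses a half-open [begin, end) range:

String s = "hello";
s.substring(1, 3);                      // "el" - the length is simply 3 - 1
s.substring(0, 2) + s.substring(2, 5);  // "hello" - adjacent ranges, nothing duplicated or dropped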

MicroStrategy Web SDK - WTFs

MicroStrategy's Web SDK did something I totally didn't expect: calling WebReportInstance.getPrompts() actually runs the report if the report has no prompts. WTF??

This is a big deal if the report takes a long time to run.  I would expect a method called getPrompts to, you know, get the list of prompts.

After a few rounds with support, I got the answer:


reportSource.setExecutionFlags(EnumDSSXMLExecutionFlags.DssXmlExecutionResolve);

Note that the call is on the ReportSource, before getting the ReportInstance.

Same goes for Documents.

Speaking of WTFs with MicroStrategy's Web SDK: even though both subclasses of WebResultSetInstance (documents and reports) implement a getMessage() method that returns a subclass of WebMessage, the method is not declared on the common base class - only on the individual subclasses.

In other words, instead of this:

class WebResultSetInstance { public WebMessage getMessage() }

you get this:

class WebDocumentInstance { public WebDocumentMessage getMessage() }
class WebReportInstance { public WebReportMessage getMessage() }

This is annoying, because it prevents you from doing something like this:

resultSetInstance.getMessage().removeFromInbox()

interchangeably with a report or a document.  You have to handle the report and document cases separately.
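What you end up writing instead is a dispatch on the concrete type, roughly like this (just a sketch of the workaround - the helper method is hypothetical, and exception handling is omitted since it depends on the SDK version):

// hypothetical helper: dispatch on the concrete type because the base
// class does not declare getMessage()
void removeFromInbox(WebResultSetInstance instance) {
    if (instance instanceof WebReportInstance) {
        ((WebReportInstance) instance).getMessage().removeFromInbox();
    } else if (instance instanceof WebDocumentInstance) {
        ((WebDocumentInstance) instance).getMessage().removeFromInbox();
    }
}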


KnockoutJS is great, not so sure about PagerJS

I've been using KnockoutJS for a while and I like it.  Knockout solves a clear need for me: without it or something similar, I'd have a mess of jQuery event handlers that breaks every time my HTML markup changes.  And because Knockout only solves for data binding, it is simple and easy to learn - especially after using JSF on the server, which has a similar two-way binding model.  I imagine .NET developers would have a similar transition from XAML, WPF, etc.

The one thing I'm not as happy with is extending Knockout with PagerJS for client-side routing.  Although it's easy to get started with Pager, it doesn't seem to work well with the use case where only one page is active at a time and each page has its own isolated view model.  Pager does have mechanisms to lazily fetch templates and bind view models when a page becomes active, but the DOM and the view model don't go away when you move to another page.

This might be ok for some use cases like master/detail, or tabbed navigation on a single page.  But it seems like the growing memory footprint will cause problems when the DOM and view models for multiple independent pages are all in scope when they don't need to be.

Ideally there would be some way to dispose view models and DOM whenever we leave one page for another.

Web stacks that support change - but what kind?

I've recently made a transition from using dependency-injection heavy Java frameworks (Spring, Java EE) to full-stack dynamic language frameworks like Grails, Django, Ruby on Rails etc.

With these full-stack frameworks, you need much less code to get something done, between the magic of dynamic languages and the elimination of dependency injection.  As a result it's easier to iterate rapidly as business requirements change, because there are fewer places to touch and fewer layers of indirection to sort through.

It made me wonder whether dependency injection in Spring and Java EE was intended to address a different kind of flexibility - the flexibility to change out pieces of the tech stack.  I feel like this kind of flexibility is still a hangover from EJB 2.x.  The legacy use case was something like this: maybe EJB won't always suck, so you put your business logic in a service that gets injected, and you could later replace your POJO implementation with an EJB without changing the code that depends on it.

I feel like the flexibility from dependency injection was overrated - think about how often during a typical lifecycle you make a significant tech stack change, and how much time you spend wading through all those extra layers of indirection and configuration to make a functional change.  Also, these days, with the proliferation of viable technologies, you are almost as likely to try something totally different (node.js, perhaps?) as to change out your ORM in isolation.

Yes, things are better today with annotations and better dynamic bytecode generation (e.g., no gratuitous interface with single implementation).  Still it feels like dynamic proxies are a clumsy mechanism for providing services like transactions, when compared with frameworks that provide these capabilities through dynamic language features or closures.  Given the choice, I think I'd rather go with a tech stack that makes it simpler and more efficient to address business requirement changes, even if it makes it harder to switch between EclipseLink and Hibernate.

C# envy

I was converting a small Java class to C#, and found a lot to like about it as a language.  For even a simple class - just collection and string manipulation, no frameworks - it seemed like some common tasks were easier, more elegant and less verbose.  Some examples:
  • Object initializers.  It's great to be able to do 'new Foo() { bar = "xxx", baz = "yyy" }' instead of having to call setters individually, or having to make a constructor passing in properties in a specific order.
  • Collection initializers, like "new List<int> {1,2,3}" instead of Arrays.asList
  • "var" keyword.  var foo = new List<int>(), as opposed to the redundant Java version, List<Integer> foo = new ArrayList<Integer>().
  • Operator overloading, especially "==" for immutable value types like strings and integers.  You don't have to worry about things like "x == y" working if x and y are both ints but not Integers.  
  • Operator overloading for collection indexing.  Where's the "Get" method on a Dictionary?   Don't need one - it's square brackets, just like a Javascript object.
It looks like Java 8 will give us collection initializers but I'm not sure about others.

IE7 z-index issue

The IE z-index bug is old news by now, but since I struggled so much to understand it as a relative newbie to CSS, I wanted to preserve an explanation for posterity.

This is an illustration of the issue - also available via jsfiddle at http://jsfiddle.net/fhJxd/5/
<div id="parent1" style="position: relative; width: 400px; height: 100px; background: #fcc">
    <span>red header</span>
    <div id="absoluteChild" style="position: absolute; top: 0px; left: 0px; width: 200px; height: 200px; z-index: 100; background: #cfc">
        <span>On all browsers except IE7, the green box will be on top of the blue box on bottom and cover some of the text</span>
    </div>
</div>
<div id="parent2" style="position: relative; width: 400px; height: 200px; background: #ccf">
    <span>The text in this box is fully visible only on IE7. Even though the green box has a high z-index, IE7 still paints the blue box on top</span>
</div>


This will look different in IE7 than in everything else.  (The IE8/IE9 developer tools correctly replicate the IE7 behavior.)

What's going on here?

The first step is to understand that CSS z-index is not absolute - it's only relative within a "stacking context".  Every positioned element with an explicit z-index value (i.e., not 'auto') creates a new stacking context.  z-index values are only compared between elements within the same stacking context.  Stacking contexts are hierarchical: the z-index of the positioned element that created a stacking context is compared against other elements in the parent stacking context, and that element is rendered accordingly along with all of its descendants.  So it's actually possible for an element with a z-index of 100 to sit behind an element with a z-index of 1, if the two are descendants of different positioned elements.

Once you understand that, it's much easier to understand the IE7 problem.  IE7's bug is that it treats the unspecified z-index on parent1 and parent2 the same as z-index: 0, so parent1 and parent2 each inadvertently define a new stacking context.  The z-index of 100 on absoluteChild doesn't matter - it is only compared against other descendants of parent1.  Since parent1 and parent2 have the same effective z-index, DOM order breaks the tie, and parent2 plus its descendants paint on top of parent1 and its descendants.

More recent browsers (IE8+, Chrome, Firefox, etc.) will not create a new stacking context for parent1 and parent2.   In that case, the z-index of 100 on absoluteChild causes it to render on top.

There is another good example here: http://therealcrisp.xs4all.nl/ie7beta/css_zindex.html


Eclipse debugger WTFs

This bug renders the Eclipse debugger useless if a class defines or uses generics:
 https://bugs.eclipse.org/bugs/show_bug.cgi?id=344856

Also, the Eclipse Glassfish plugins cannot launch the server in debug mode unless domain.xml is set up to use the default debug port (9009).  If your domain.xml has been changed or customized, Eclipse will not be able to attach the debugger when launching the server in debug mode, and unfortunately the "connection refused" error doesn't give you much of a clue about what's going on.

http://stackoverflow.com/questions/8344454/oepe-glassfish-debug-connection-refused


Handling poison JMS messages in Glassfish - infinite loop WTF?

I found this post useful for understanding problems with handling "poison messages" in message-driven beans:

http://weblogs.java.net/blog/felipegaucho/archive/2009/09/24/handling-poison-messages-glassfish

The gist of it is that even if you think you caught an exception, your transaction still might roll back and cause the JMS message to be re-delivered.  (The MDB is an EJB, which will default to container-managed transactions, equivalent to having REQUIRED on each method.)

There are two small caveats to add:

1. The specific issue with JPA is that certain persistence exceptions mark the transaction for rollback, even if the exception is actually caught.  It doesn't matter whether the JPA activity happens directly in onMessage() or in what you might think of as a "sub-transaction".

2. Simply annotating another method in the same MDB with @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW) does not actually create a separate transaction context.  When you call a method locally within the same EJB, "this" refers to the object itself, while the transaction behavior lives in the EJB proxy wrapper.  (See http://stackoverflow.com/questions/427452/ejb-transactions-in-local-method-calls.)  So you actually have to put the transactional work in a separate EJB and inject it into the MDB with @Inject or @EJB, as sketched below.
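A rough sketch of that arrangement (class, queue and method names are placeholders; assumes EJB 3.x annotations):

import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

@Stateless
public class MessageProcessor {

    @PersistenceContext
    private EntityManager em;

    // runs in its own transaction because the call arrives through the EJB proxy
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void process(String payload) {
        // ... JPA work that might mark its own transaction for rollback ...
    }
}

// (in its own source file)
@MessageDriven(mappedName = "jms/myQueue")
public class MyMDB implements MessageListener {

    @EJB
    private MessageProcessor processor;  // injected proxy - not a local "this" call

    public void onMessage(Message message) {
        try {
            processor.process(((TextMessage) message).getText());
        } catch (Exception e) {
            // the inner transaction rolled back, but the MDB's own transaction
            // can still commit, so the message is consumed instead of redelivered
        }
    }
}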

And now for the "WTF" part.

The thing that really surprised me is the retry behavior.  If the MDB throws an exception, and the transaction rolls back as a result, Glassfish recognizes that there was an exception and will only re-deliver the message once.  There is some setting somewhere that controls how many retries are attempted, I believe.  (Haven't found it.)

If you catch an exception that marked the transaction for rollback (or the transaction was marked for rollback programmatically), the transaction still rolls back and the message is redelivered.  However, a rollback without an exception does not trip Glassfish's retry counter, so you end up in an infinite redelivery loop.

Either way the solution is the same, but still - WTF!

equals and hashCode on @Entity classes: just say no

I've come to the conclusion that you should avoid defining equals and hashCode on JPA @Entity objects unless you have a really good reason to.

Doing it right is non-trivial, with all the gotchas and caveats.  First, the logistics:
  • You can't use the primary key because multiple unsaved objects all have a null PK and would be "equal".  
  • Using the PK with object identity as fallback means an object won't be equal to itself before and after being persisted, and hash-based collections won't work properly if the hash code changes during the collection's lifespan.
  • A unique business key could work, if that's an option - but you have to remember to maintain the equals and hashCode method if those properties or relations change.  Also if that key isn't immutable, you have a similar problem as with PK + fallback above.
  • Creating a UUID in the constructor just to make equals and hashcode work is... yucky.
  • Often these methods are insufficiently covered by unit tests.
Second, there's the deeper question of what "equals" should mean in the context of a mutable business entity.  What comparison makes sense is often context-dependent: sometimes you want to compare on primary key, other times on some other property.

So what's the worst thing that happens if you just don't do anything, and take the default based on object identity?

Usually, you won't even miss these methods.  The default implementation works fine unless you're trying to compare objects loaded in two different persistence sessions/transactions, with either equals() or contains().  I've found this is the exception case - most of my collection manipulation in practice involves objects all loaded in the same session to hand off to the view layer, so HashSet recognizes duplicates and contains() works just fine.  And I almost never use an entity as a key in a HashMap.

The price you pay is more verbosity when you do need to compare objects between a session-scoped cache and the current request.  You have to remember that obj1.equals(obj2) doesn't work and needs to be replaced with obj1.getId().equals(obj2.getId()).  Contains is a bit more verbose, and a utility method like containsById(collection, obj) might be helpful (see the sketch below).  Some people will say it's confusing that equals doesn't "just work", but I find it less confusing to be explicit about what you're comparing on - and less confusing than a broken or unmaintained equals() method.
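Such a utility might look like this (a sketch - the Identifiable interface is hypothetical; any base class or interface exposing getId() would do):

import java.util.Collection;

// hypothetical interface implemented by entities with a Long surrogate key
public interface Identifiable {
    Long getId();
}

// (in its own source file)
public final class EntityUtils {

    public static boolean containsById(Collection<? extends Identifiable> collection, Identifiable candidate) {
        for (Identifiable item : collection) {
            if (candidate.getId() != null && candidate.getId().equals(item.getId())) {
                return true;
            }
        }
        return false;
    }
}

Then the cross-session check becomes containsById(cachedCollection, obj), which works as long as both sides have been assigned IDs.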

Finally, if this were C#, we wouldn't even have this discussion.  With closures and LINQ extension methods we would just say "collection.Where(obj => obj.Id == foo.Id)" and be done with it!

Related links:
http://community.jboss.org/wiki/EqualsAndHashCode

http://onjava.com/pub/a/onjava/2006/09/13/dont-let-hibernate-steal-your-identity.html?page=3

http://stackoverflow.com/questions/1929445/to-equals-and-hashcode-or-not-on-entity-classes-that-is-the-question

http://stackoverflow.com/questions/1638723/equals-and-hashcode-in-hibernate

https://forum.hibernate.org/viewtopic.php?t=928172

http://www.ibm.com/developerworks/java/library/j-jtp05273/index.html: "For mutable objects, the answer is not always so clear. Should equals() and hashCode() be based on the object's identity (like the default implementation) or the object's state (like Integer and String)? There's no easy answer -- it depends on the intended use of the class... It is not common practice to use a mutable object like a List as a key in a HashMap."

http://burtbeckwith.com/blog/?p=53

Code coverage - unintended benefit

Unit test coverage is not a panacea.  You can reach 100% unit test coverage without a single assertion about outputs or business logic, in which case the tests aren't that useful.

Or are they?  Just the act of executing every single line and branch of code does provide one important value: it demonstrates that you know which inputs or conditions trigger which parts of the code.

More importantly, the act of getting there helps reveal sections of code and conditionals that you might not even need.  Having a coverage target also gives you incentives to stay "clean" and avoid gratuitous not-null checks or try/catch blocks, two pet peeves of mine.  (I've seen too many not-null checks that looked defensive at first but in reality just kicked the can down the road, further obscuring the real problem.)

The other benefit I've found is that a coverage goal encourages refactoring that you should be doing anyway, because well-factored code is easier to unit test.  For instance I had one JSF/JPA application where I was accessing JPA EntityManagers directly in the JSF managed beans.  This made unit tests on the managed beans annoying because I had to mock out EntityManager.createQuery and dozens of Query.setParameter calls for each JPA action.  By pulling the JPA actions into a separate DAO layer, I could just mock a single call to myDAO.getStuff(arguments).  Plus, after isolating the DAO, I could then write an integration test on the DAO hooking up to a real database.
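Roughly what that refactoring looked like (a simplified sketch; the Widget entity, query, and method names are placeholders):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// a thin DAO that the managed bean depends on instead of the EntityManager
public class WidgetDao {

    @PersistenceContext
    private EntityManager em;

    public List<Widget> findByOwner(String owner) {
        return em.createQuery("select w from Widget w where w.owner = :owner", Widget.class)
                 .setParameter("owner", owner)
                 .getResultList();
    }
}

In the managed bean's unit test there is now just one call to stub - e.g., with Mockito, when(widgetDao.findByOwner("bob")).thenReturn(someList) - and the DAO itself can get its own integration test against a real database.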

Other related links:
http://www.wakaleo.com/blog/316-code-coverage-as-a-refactoring-tool
http://codebetter.com/patricksmacchia/2009/06/07/high-test-coverage-ratio-is-a-good-thing-anyway/


Primefaces AJAX callbacks: onstart vs. onclick

I just learned the hard way that onstart and onclick are not the same thing.

In particular, a "return ..." has very different semantics in the two cases.

Consider this code:
<p:commandLink action="#{bean.method}" onstart="return func()" ...>

If func() returns false, this code aborts the AJAX request and bean.method() won't get called.
If func() returns true, the AJAX request goes through.
If you replace onstart with onclick, the AJAX request will abort even if func() returns true.

That's because Primefaces puts the code that generates the AJAX request in the rendered onclick handler, prepending your code from the p:commandLink onclick attribute before it.  If your code returns first, the AJAX request never gets sent.

Atlas Debugged: The Fountainhead, YAGNI and "clean code"

The last time I re-read The Fountainhead, I felt like many of the ideas in the book could be useful for software developers.  It seemed like there are many parallels between the values demonstrated by the book's hero, Howard Roark, and good development practices.

Many of the ideas dovetail nicely with some Agile development principles, especially the various "keep it simple" ones like YAGNI and DRY (along with DRY's cousins, "once and only once" and "three strikes and refactor").  Roark's designs are driven entirely by purpose, function and constraints, with an open disdain for non-functional or ornamental additions.  For example, about the house he builds for Austen Heller, Roark says:

“Every piece of it is there because the house needs it – and for no other reason… You can see each stress, each support that meets it... But you’ve seen buildings with columns that support nothing, with purposeless cornices, with pilasters, moldings, false arches, false windows… Your house is made by its own needs. The others are made by the need to impress. The determining motive of your house is in the house. The determining motive of others is the audience.”

Keeping things simple and focused is hard work, just as agile development and YAGNI are not an excuse for sloppiness.  If done correctly, it should be quite the opposite.  In Roark's designs, “Not a line seemed superfluous, not a needed plane was missing. The structures were austere and simple, until one looked at them and realized what work, what complexity of method, what tension of thought had achieved the simplicity.” [emphasis added]

This made me think of the quote in Uncle Bob's Clean Code: "Learning to write clean code is hard work... You must sweat over it. You must practice it yourself, and watch yourself fail. You must watch others practice it and fail."

There were some other ideas I'm hoping to examine in more depth:
- right tool for the job - using technology idiomatically vs. legacy patterns with new technology
- the architect as a hands-on practitioner (vs. ivory tower)
- leveraging innovations - new methods, technologies etc.
- professional satisfaction / motivation
- interactions with business stakeholders, "people skills" and organizational politics
- making the best of bad situations - looking for the best possible solution even if you don't agree with the business problem to be solved

Primefaces global AJAX events

You can use jQuery global AJAX events with PrimeFaces to refactor behaviors that appear on multiple components.  A good example is a data grid with multiple p:commandLinks and other controls that execute different methods and re-render the grid, where you need to run the same oncomplete logic in all of them.

For example, if you start with something like this:

<h:panelGroup id="grid">...</h:panelGroup>
<p:commandLink update="grid" actionListener="#{bean.update1}" oncomplete="updateGrid()"/>
<p:commandLink update="grid" actionListener="#{bean.update2}" oncomplete="updateGrid()"/>
<p:commandLink update="grid" actionListener="#{bean.update3}" oncomplete="updateGrid()"/>

You could pull the updateGrid() call out into a global AJAX listener like this:

jQuery(document).ajaxComplete(function(e, xhr, opts) {
    var $response = jQuery(xhr.responseXML);
    // only react if the partial response updated the grid
    if ($response.find("div[id$='grid']").length > 0) {
        updateGrid();
    }
});
By using ajaxComplete and parsing the XHR response object, you can see which div was affected by the partial update.  That way, if you have other AJAX controls (say, an autocompleter) that you don't want to trigger the updateGrid() function, you can filter them out.  Another option is to set global=false on the specific Primefaces components that you don't want to fire the global jQuery ajaxComplete event.  The Primefaces autocompleter doesn't support this in Primefaces 2.x but does in 3.0.

Primefaces p:ajax and jQuery AJAX events

All of Primefaces' JSF components use the jQuery AJAX engine, which means you can catch global AJAX events with $(document).ajaxStart and $(document).ajaxStop.

Standard JSF components like h:selectOneMenu will also go through jQuery when a p:ajax behavior is attached (vs. the standard f:ajax).

Conversely, jsf.ajax.onEvent does not appear to work with Primefaces components as it does for f:ajax.  

See also:
http://stackoverflow.com/questions/6166352/fajax-and-jquerys-document-ajaxstart-dont-work-together