Lately I've been working with Jersey, the JAX-RS (JSR 311) REST library.
I've worked with servlets, I've worked with JAX-WS annotations, and this seems to merge the two nicely.
My initial reaction was that it was super easy to create simple GET methods and return XML or JSON using JAXB annotated classes.
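That first GET case can be sketched roughly like this — a minimal sketch, where Customer, CustomerResource, and the paths are hypothetical names, not from any real project:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical JAXB-annotated entity; Jersey marshals it to XML or JSON
// based on the request's Accept header and the @Produces annotation.
@XmlRootElement
class Customer {
    public int id;
    public String name;
}

// Hypothetical resource class: a simple GET returning the JAXB entity.
@Path("/customers")
public class CustomerResource {

    @GET
    @Path("{id}")
    @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
    public Customer get(@PathParam("id") int id) {
        Customer c = new Customer();
        c.id = id;
        c.name = "customer-" + id;
        return c;
    }
}
```

Since JAX-RS resources are plain objects, the method can also be called directly in a unit test without a container.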
My second reaction was that I couldn't organize the services the way I wanted. For example, Jersey doesn't allow resources to share the same path, which is exactly what I wanted to do: first, to declare the path only on an interface and share it between the real and test resources; second, to put a path annotation on the package and differentiate the resource methods with method-level path annotations alone. I'm guessing this would be possible if the path namespace were per method (or package/class/method) rather than per class.
But although annotating interfaces with paths works, it's annoying because of the ugly warning messages Jersey logs.
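The interface-annotation setup looks roughly like this (a sketch with hypothetical names; Jersey resolves the inherited annotations but complains about them in the log):

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// The path and method annotations live only on the shared interface.
@Path("/status")
interface StatusResource {
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    String status();
}

// Real implementation, registered in production.
class RealStatusResource implements StatusResource {
    public String status() {
        return "up";
    }
}

// Test implementation, registered in the test harness instead.
class StubStatusResource implements StatusResource {
    public String status() {
        return "stubbed";
    }
}
```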
My third reaction was that feeding data in wasn't as simple as returning it. While GET return values were converted to JSON/XML cleanly, incoming parameters proved more difficult. I was attempting to pass in both simple string parameters and complex JSON or XML data together, with the latter converted automatically to JAXB entities the way method return values are. I eventually found that I could accomplish what I wanted using a POST with query params, where the body contained the structured data and the Consumes annotation (like Produces) determined how to interpret it.
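The POST arrangement can be sketched like this (Order and the parameter names are hypothetical): simple values ride in as query params, while the body is unmarshalled into a JAXB entity according to the Consumes media type.

```java
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical JAXB entity carried in the request body.
@XmlRootElement
class Order {
    public String item;
    public int quantity;
}

@Path("/orders")
public class OrderResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)   // how to interpret the body
    @Produces(MediaType.APPLICATION_JSON)
    public Order create(@QueryParam("notify") boolean notify, Order order) {
        // 'order' was unmarshalled from the JSON body via its JAXB annotations;
        // 'notify' arrived as a plain query parameter alongside it.
        if (notify) {
            order.item = order.item + " (notified)";
        }
        return order;
    }
}
```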
In other news, I've tried out the PMD (again) and FindBugs Maven plugins, both of which help point out and prevent bugs in code.
In testing, the results pointed out string concatenation in a loop, resources not closed on certain code paths, bad null-check logic, unused fields and imports, and empty catch blocks.
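For instance, the loop-concatenation finding and its usual fix look like this (an illustrative snippet, not the actual flagged code):

```java
import java.util.List;

public class LoopConcat {

    // Flagged: each += allocates a new String and copies everything built
    // so far, making the loop quadratic in the total length.
    static String joinBad(List<String> parts) {
        String out = "";
        for (String p : parts) {
            out += p;
        }
        return out;
    }

    // The usual fix: accumulate in a StringBuilder, linear overall.
    static String joinGood(List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) {
            sb.append(p);
        }
        return sb.toString();
    }
}
```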
Finding faulty null logic with FindBugs has been very useful for pointing out where these assumptions differ across the code (passing null in one location while assuming non-null in another).
My only complaint with FindBugs is that, unlike the PMD plugin, it can't generate its HTML report outside the site goal.
I'm not sure whether such a tool exists, but what would be useful is one that points out classic leftover test lines in code. I mean where two adjacent lines both assign a value to the same field, but one is commented out and the other is not. This is especially suspicious when one of the lines assigns a hardcoded value.
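The pattern I mean looks something like this (a hypothetical class, purely illustrative):

```java
public class EndpointConfig {

    private final String endpoint;

    public EndpointConfig() {
        // The classic leftover: two adjacent lines assigning the same field,
        // one commented out, the other a hardcoded value from a test run.
        // endpoint = System.getProperty("service.endpoint");
        endpoint = "http://localhost:8080/test";
    }

    public String getEndpoint() {
        return endpoint;
    }
}
```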
But I guess the answer is to have a proper unit test that says "hey, you left this crap line here". And to use version control and review your commits thoroughly.