Thursday, September 6, 2012

C11/C++11: Building blocks for the future

2012-08-27, SD Times, by Larry O'Brien
Click here to see the article


With single instructions taking less than a nanosecond to execute, and data chugging its way over the Internet with latencies in the dozens and hundreds of milliseconds, there’s ample room in the middle for a productivity-enhancing, language-generalizing, safety-increasing virtual machine. Except when there isn’t, as when pushing a huge dataset through a mathematical transformation, or when the abstraction provided by the VM or language allocates memory that is not strictly needed by the domain logic.

Apple has proved the popular appeal of rapid OS evolution (particularly in the mobile market), and new operating system capabilities are not generally available to the managed platforms. The JVM is particularly troubled by lowest-common-denominator capabilities, but Microsoft too has failed to deliver on its decade-old promise to make the CLR an equal, if not the primary, interface to Windows. Last year’s panic about Microsoft “abandoning .NET” proved to be overblown (an overreaction to “what was not said” at a few presentations), but it is absolutely true that Microsoft is emphasizing C++ development more prominently than it has in a decade.

In addition, while there’s plenty of time in the middle ground between clock speeds, network latencies and user-perceivable performance, the “put a VM in it” model introduces a significant energy penalty, since most VM instructions translate into multiple native instructions. Anyone who’s looked at a teardown of a tablet or a small notebook knows the astonishing volume dedicated to the battery, and battery life is the most important feature for smartphone buyers.

Finally, although C and C++ get little respect from the academic language community, it is hard to ignore the fact that they remain at or near the very top of languages mentioned in help wanted ads: knowing C and C++ remains one of the very most valuable skills to have in your portfolio.

There are two major criticisms of C and C++: that the languages are not highly productive for application development, and that they are too complex. The latter is often made more emphatic by the complaint that the two languages are “designed by committee.” Both critiques have a good dose of truth. C and C++ have traditionally required significant effort to manage memory, bugs have a tendency to be more opaque than in managed languages, and the edit-run-debug cycle can be very slow in larger codebases.

And for complexity? Well, yeah. C++’s template facility is Turing complete and high-performing, which has led to the no-doubt complex technique of “template metaprogramming.” And both languages show the strata of the many epochs of programming during which they’ve evolved: integer types of ever-increasing sizes, archaic keywords like register and inline (I know, they’re not archaic everywhere), and perhaps above all the evolution of character representations and the string libraries, which have had to be improved to deal with both Unicode and buffer-overflow vulnerabilities.

Having said all that, one of the most interesting things about C++11 is that “making C++ easier to teach and learn” was one of the two leading goals, according to language designer Bjarne Stroustrup. The other was to “make C++ a better language for systems programming and library building.”

‘More than a billion lines of code’ 
Stroustrup and others worked within the bureaucratic process of ISO standardization. The C and C++ languages are standardized under the International Organization for Standardization (ISO), the gold standard of technical standards groups.

The C and C++ ISO standards groups have been working since the mid-1990s. ANSI, the American National Standards Institute and the United States representative to ISO, worked on a C-language standard all the way back to the early 1980s. C has had three standards: C89, C99 and C11. (One often hears of the 1989 and 1999 standards called “ANSI C,” which is an anachronism now that the language is under ISO.) C++ has also had three: C++98, C++03 and C++11, as well as a library-focused “Technical Report” release in 2005.

The overwhelming theme of the C/C++ standards is that they’re conservative. C, in particular, is universal: If there’s a chip, there’s a C compiler for it. Every aspect of the language that has to do with memory or implies something about hardware implementation is going to be carefully scrutinized by literally hundreds of companies worrying, “What about me?”

And there’s a lot of C and C++ code out there. Coverity, which makes a static analysis tool, in 2009 had approximately 700 customers “with somewhat more than a billion lines of code among them.” (The quote is from the article “A Few Billion Lines of Code Later,” which is an absolute must-read for anyone interested in understanding the C and C++ world.) While Coverity is a successful and well-regarded company, there’s no way its customer base represents more than a tiny fraction of the total amount of C and C++ code used and depended upon throughout the world. So every change, no matter how trivial and obviously “correct” (for instance, not requiring a space between the closing right brackets in template expressions) is going to break someone’s code, somewhere, and cause pain. The awareness of how troublesome changes can be is palpable during discussions at standards meetings. That anything is approved can seem to an onlooker to be the biggest triumph of such meetings.

Language directives 
But big changes have come to the languages, albeit slowly. C11 boasts better Unicode support, more control of alignment, and, most importantly, an improved and standardized memory model that supports thread-local storage and uninterruptible object access (a.k.a. “atomic” access, the guarantee that an access will not be interleaved with access from some other thread). The Unicode support, where a “u8”, “u” or “U” prefix on a string literal specifies the encoding, will probably be the most visible giveaway in C11 code.
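
For a rough idea of what this looks like in source, here is a minimal sketch using the C++11 spellings of the same facilities (C11 spells the keywords _Atomic and _Thread_local, and puts the new character types in <uchar.h>); the names are invented for illustration:

    #include <atomic>

    // The u8/u/U prefixes fix the encoding of a string literal.
    const char     *utf8  = u8"text";   // UTF-8
    const char16_t *utf16 = u"text";    // UTF-16
    const char32_t *utf32 = U"text";    // UTF-32

    std::atomic<int> counter{0};        // "atomic" access: never interleaved with another thread
    thread_local int per_thread_id = 0; // thread-local storage: one instance per thread

    int main() {
        ++counter;                      // an indivisible read-modify-write
        per_thread_id = 1;
    }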

The changes in C++ are considerably more visible. In addition to the directive of “making C++ easier to teach and learn” mentioned earlier, the other major directive was to “make C++ a better language for systems programming and library building.” Together, these two directives pushed the language to adopt a lot of lessons from the mainstream managed languages while never sacrificing the emphasis on performance or ability for low-level control.

C++ changes 
Type inference: Nowhere are the lessons of the mainstream languages clearer than in the adoption in C++ of type inference. The clearest visible signature of C++11 code will be the use of the auto keyword on the left-hand side of assignments. While this may feel a bit like the dynamic typing of a Python or Ruby, it’s not: The type is fixed and statically determined. The auto is just a convenience to avoid cumbersome “finger-typing” that often just restates what you’ve typed on the right-hand side (auto instead of, e.g., a fully spelled-out map<...>::iterator or function<...> type). This “type inference” will be familiar to users of C# or Scala but will take some getting used to by others, especially when the compiler is unable to infer the correct type and complains about it; such complaints are a common source of questions on Stack Overflow for type-inferred languages.
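
A minimal sketch of the difference (the container and names here are invented for illustration):

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    int main() {
        std::map<std::string, std::vector<int>> scores{{"alice", {90, 85}}};

        // Pre-C++11, the full iterator type had to be written out:
        //   std::map<std::string, std::vector<int>>::iterator it = scores.begin();
        auto it = scores.begin();   // same static type, just deduced by the compiler

        std::cout << it->first << " first score: " << it->second.front() << '\n';
    }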

Lambdas: Type inference saves a good amount of space when it comes to complex parameterized types, but it is also an important enabler of probably the biggest functional improvement in C++11: support for lambda functions.

Anonymous inline functions are again familiar to users of some of the more well-designed managed languages, but are even more universal in the world of JavaScript. Such “lambda” functions are wildly useful, particularly when used with Standard Template Library higher-order functions such as find_if. The syntax is a little more cumbersome than what one sees in other languages, as C++ requires you to specify the “captures” when the lambda function is being used as a closure. For example, [foo, &bar](bat) -> int { …function body… } captures foo by value, bar by reference, takes bat as a parameter, and returns an integer (the return type can be elided in most cases). While the flexibility for different capture semantics is very nice, one can imagine stumbling the first several times one encounters that type of code.
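
A minimal sketch of a lambda used with find_if (the data and variable names are invented for illustration):

    #include <algorithm>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> prices{12, 47, 3, 99, 20};
        int limit = 30;          // captured by value below
        int comparisons = 0;     // captured by reference below

        // [limit, &comparisons]: limit is copied into the closure, comparisons is referenced.
        auto hit = std::find_if(prices.begin(), prices.end(),
                                [limit, &comparisons](int p) {
                                    ++comparisons;
                                    return p > limit;   // return type deduced as bool
                                });

        if (hit != prices.end())
            std::cout << *hit << " found after " << comparisons << " comparisons\n";
    }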

Range-based for: The third highly visible feature in C++11 is “range-based for.” This compact way of specifying a loop can work with any type that supports the concept of a “range” (essentially, any type that has functions begin and end that yield iterators). All containers, arrays, initializer lists and regex matches can be looped on with a simple for(auto p : ps). This provides remarkable savings over the long type signatures of C++, to which developers using the standard containers have become accustomed. 
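
A minimal sketch of the shorter loop form (the container here is invented for illustration):

    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> ps{1, 2, 3, 4};

        // Works for anything with begin()/end(); no iterator type to spell out.
        for (auto p : ps)
            std::cout << p << ' ';

        // Use a reference when the loop needs to modify the elements in place.
        for (auto& p : ps)
            p *= 2;
    }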

What’s more likely to change the experience of programming is the stylistic imperative to “always use smart pointers or non-owning raw pointers.” Smart pointers in C++11 use the language’s template mechanism to provide reference-counted memory management, a type of garbage collection that is not as foolproof as the fully automatic garbage collection of managed languages, but which is much easier than fully manual memory management and is compatible with powerful C++ idioms such as Resource Acquisition Is Initialization (RAII).

C++98’s earlier attempt at a standard smart pointer, auto_ptr, was flawed and is now deprecated; instead, one uses unique_ptr for sole ownership, shared_ptr for shared ownership, and weak_ptr to break cycles. The great challenge with reference counting is that if you create a cycle in the object graph, the memory will never be freed; in C++11, the programmer is still responsible for building the ownership graph correctly.
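
A minimal sketch of the three pointer types, assuming a made-up Node structure for the cycle example:

    #include <memory>

    struct Node {
        std::shared_ptr<Node> next;   // owning link
        std::weak_ptr<Node>   prev;   // non-owning back-link: breaks the reference cycle
    };

    int main() {
        // unique_ptr: sole ownership; the int is freed automatically at end of scope.
        std::unique_ptr<int> widget(new int(42));

        auto a = std::make_shared<Node>();
        auto b = std::make_shared<Node>();
        a->next = b;   // a keeps b alive
        b->prev = a;   // the weak_ptr does not keep a alive, so both nodes are freed at exit
    }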

Concurrency: C and C++ were invented in the era of single-processor machines, but are still the languages of operating-system implementations for the foreseeable future. As mentioned previously, the C11 memory model has been improved to be multi-thread safe, and C++ builds on that with a number of primarily library-based features.

std::thread takes any callable object and runs it asynchronously. More interesting are the techniques associated with futures, which provide a straightforward conceptual model. (A future is an IOU for a value; calling get on it blocks until the value is ready.) All the usual caveats (and flashing red lights and dire warnings nailed to doors in the dead of night) about the pitfalls of references and pointers and object lifetimes continue to apply.
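
A minimal sketch of the future-as-IOU model, using std::async with an invented stand-in for expensive work:

    #include <future>
    #include <iostream>

    // A deliberately slow function standing in for real work.
    long sum_to(long n) {
        long total = 0;
        for (long i = 1; i <= n; ++i) total += i;
        return total;
    }

    int main() {
        // std::async runs the callable asynchronously and hands back a future:
        // an IOU for the eventual result.
        std::future<long> answer = std::async(std::launch::async, sum_to, 100000000L);

        // ...this thread is free to do other work here...

        std::cout << answer.get() << '\n';   // get() blocks until the value is ready
    }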

The standard additionally specifies several types of mutex, thread-local data, and thread-safe initialization. It is by no means a silver bullet for concurrency, but it is intended as a solid foundation for further work.
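
A minimal sketch of those pieces together, with invented names: std::thread, std::mutex with a scoped lock, and std::call_once for thread-safe one-time setup:

    #include <mutex>
    #include <thread>
    #include <vector>

    std::mutex events_mutex;
    std::once_flag init_flag;
    std::vector<int> events;

    void record(int value) {
        // Thread-safe one-time initialization.
        std::call_once(init_flag, [] { events.reserve(1024); });

        // lock_guard releases the mutex automatically, even if an exception is thrown.
        std::lock_guard<std::mutex> lock(events_mutex);
        events.push_back(value);
    }

    int main() {
        std::thread t1(record, 1), t2(record, 2);
        t1.join();
        t2.join();
    }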

In addition to the previously mentioned headline changes, there are a number of medium-impact features. At the border between the two are “move semantics,” which allow ownership of a structure’s contents to be transferred rather than copied and then deleted: a potentially big performance win with large structures, but one from which you can only benefit by writing new code.
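
A minimal sketch of the idea (the data is invented; real gains depend on the types involved):

    #include <string>
    #include <utility>
    #include <vector>

    std::vector<std::string> load_lines() {
        std::vector<std::string> lines(1000000, "some text");
        return lines;   // moved (or elided) out of the function: no million-string copy
    }

    int main() {
        std::vector<std::string> a = load_lines();

        // std::move turns a into an rvalue, so the vector's internal buffer is
        // transferred to b rather than copied; a is left valid but unspecified.
        std::vector<std::string> b = std::move(a);
    }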

Consistent initialization using braces (int x{5}; int y[]{1, 2}) is another “easier to learn” feature, even if the semantics are slightly different for “aggregates” (arrays and structs) than for non-aggregates, and one is allowed to supply fewer brace initializers than there are elements or members, triggering value-initialization (and, I think, making for pretty confusing code). 
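
A minimal sketch of brace initialization across a few kinds of types (the names are invented):

    #include <string>
    #include <vector>

    struct Point { int x; int y; };

    int main() {
        int x{5};                     // non-aggregate scalar
        int y[]{1, 2};                // aggregate: array
        Point p{3, 4};                // aggregate: struct
        Point q{3};                   // fewer initializers than members: q.y is value-initialized to 0
        std::vector<std::string> names{"ada", "grace"};   // via std::initializer_list
    }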

The new nullptr keyword is probably self-explanatory: It replaces 0 or NULL with a constant of type nullptr_t that can only be converted to a pointer or a Boolean (never, ever, to an int. Hooray!).
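
A minimal sketch of why that matters for overload resolution (the overloads are invented):

    void f(int)   {}   // overload taking an integer
    void f(char*) {}   // overload taking a pointer

    int main() {
        f(0);               // 0 is an int first, so this calls f(int)
        f(nullptr);         // unambiguously selects the pointer overload
        char *p = nullptr;  // nullptr converts to any pointer type
        // int n = nullptr; // error: nullptr never converts to an int
        f(p);
    }
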
There are additional changes that I think of as minor fixes, such as the previously mentioned “right brackets close properly” fix. 

In addition to the library-based threading and futures discussed previously, the new standard library includes hash tables, improved pseudo-random number facilities, and regular expressions (the default regex grammar is said to be “modified ECMAScript,” which, according to Scott Meyers’ “An Overview of the New C++” presentation materials, is “essentially a standardized version of Perl RE syntax”). 
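
A minimal sketch touching each of those library additions (names and data invented; note that regex support in some 2012-era compilers lagged the standard):

    #include <iostream>
    #include <random>
    #include <regex>
    #include <string>
    #include <unordered_map>

    int main() {
        // Hash table: unordered_map, new to the standard library.
        std::unordered_map<std::string, int> counts{{"error", 3}, {"warning", 7}};
        std::cout << counts["warning"] << '\n';

        // Modified-ECMAScript regular expressions.
        std::regex pattern(R"(\w+@\w+\.\w+)");
        std::cout << std::boolalpha
                  << std::regex_match("dev@example.com", pattern) << '\n';

        // Pseudo-random numbers: explicit engines and distributions.
        std::mt19937 engine(42);
        std::uniform_int_distribution<int> die(1, 6);
        std::cout << die(engine) << '\n';
    }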

Still a work in progress 
While the new C and C++ language standards are extensive and impressive, it’s fair to point out that C and C++ remain languages in which programmers can perform “dangerous” memory and thread operations: high-performance, low-level systems engineering requires that access. The hope, though, is that the new features will go a long way to avoiding the “slip of the mind” defects that, regrettably, we all introduce.

A major feature—some would say the major feature—of this version of C++ was a feature for templates called “Concepts.” It was pulled from the standard in 2009 because a majority of committee members felt that it was still untried and risky. Although a great deal of work has been done on Concepts over the years, it seems at this point that the feature may never make it into the standard or, if it does, only after a massive reworking.

Work has already begun on the next version of the standard. Aside from Concepts, major features will certainly include further steps toward a standard garbage collection model and some form of runtime reflection.

The improved memory model, reference-counting, and more concise code enabled by type inference and lambdas make the language more attractive and modern. Systems-level and performance-oriented programmers will certainly gain from enabling the latest language extensions in their compilers. The page seems a pretty actively maintained reference for compiler support.

Books on C/C++11 are mostly still in the development pipeline. An important exception is Anthony Williams’ excellent “C++ Concurrency in Action,” which goes into the memory model and related areas in great depth. The best general overview may be the course notes of Scott Meyers, author of the “Effective C++” series.

If C and C++ are undergoing a renaissance, it can only continue if the languages trigger enthusiasm in both new and old developers. The committees have done their work; now it’s time for the market to render judgment.

Saturday, March 17, 2012

Oracle lays out long-range Java intentions

2012-03-15, InfoWorld, by Paul Krill
Click here to see the article

Interoperability goals include a multilanguage JVM and improved Java/native integration

Oracle's wish list for Java beyond next year's planned Java SE (Standard Edition) 8 release includes object capabilities, as well as enhancements for ease-of-use, cloud computing, and advanced optimizations.
JDK (Java Development Kit) 10 and releases beyond it are intended to have a unified type system, in which everything would be made into objects, with no more primitives, according to an Oracle slide presentation entitled "To Java SE 8 and Beyond!" posted on the QCon conference website. Oracle cites an ambitious list of goals for Java in the presentation, which was apparently delivered by Oracle Technology Evangelist Simon Ritter last week.


A slide entitled "Java SE 9 (and Beyond)" reveals goals for interoperability, including having a multilanguage JVM and improved Java/native integration.


Other languages besides Java, such as JRuby, Scala, and Groovy, already have become popular on the JVM in recent years. A timeline provided in the presentation has JDK 9 arriving in 2015, JDK 10 in 2017, JDK 11 in 2019, and JDK 12 in 2021. The presentation declares, "Java is not the new Cobol."

Ease-of-use goals for Java include a self-tuning JVM and language enhancements. Advanced optimizations eyed include the unified type system and data structure optimizations. Under the subheading, "Works Everywhere and With Everything," Oracle lists goals like scaling down to embedded systems and up to massive servers, as well as support for heterogeneous compute models.

For cloud environments, a hypervisor-aware JVM is noted as an intention for JDK 9 and above, including cooperative memory page sharing. Multitenancy goals for JDK 8 and beyond include improved sharing between JVMs in the same OS and per-thread/threadgroup resource tracking and management.


The vision for language features in JDK 9 includes large data support, with 64-bit and large-array backing. JDK 10 and above would feature true generics, function types, and data structure optimizations, including multidimensional arrays.

Heterogeneous compute models planned for JDK 9 and beyond include Java language support for GPU (graphics processing unit), FPGA (field programmable gate array), off-load engines, and remote PL/SQL.

Also called for in the Oracle presentation is "open development," in which prototyping and research and development would be done in OpenJDK, which is the open source process for Java. Plans also call for greater community and cooperation with partners and academia.

Links:
  • To Java SE 8 and Beyond!, by Simon Ritter
  •  Some JDK 8 features (due 2013):
    • Project Lambda : Support programming in a multicore environment by adding closures and related features to the Java language; bulk parallel operations in Java collections APIs (filter/map/reduce)
    • Project Jigsaw : Module system for Java applications and for the Java platform

Thursday, March 8, 2012

Why should you use Unchecked exceptions over Checked exceptions in Java

The debate over checked vs. unchecked exceptions goes way, way back. Some say it’s one of the best features Java included. Others say it was one of its biggest mistakes[1].

It looks like the debate is over. In this post I will try to include links to articles and books that speak about this topic. I am not an expert voice on this, but I will try to explain why I reached this conclusion.

So, we are talking about:

Unchecked exceptions:
  • represent defects in the program (bugs) - often invalid arguments passed to a non-private method. To quote from The Java Programming Language, by Gosling, Arnold, and Holmes: "Unchecked runtime exceptions represent conditions that, generally speaking, reflect errors in your program's logic and cannot be reasonably recovered from at run time."
  • are subclasses of RuntimeException, and are usually implemented using IllegalArgumentException, NullPointerException, or IllegalStateException
  • a method is not obliged to establish a policy for the unchecked exceptions thrown by its implementation (and it almost never does)

Checked exceptions:
  • represent invalid conditions in areas outside the immediate control of the program (invalid user input, database problems, network outages, absent files)
  • are subclasses of Exception
  • a method is obliged to establish a policy for all checked exceptions thrown by its implementation (either pass the checked exception further up the stack, or handle it somehow)

The above definitions are as given on the Java Practices page[2].

In many of the projects I have worked on, I have seen different ways of coding, various strategies, code formatting, class-naming styles, databases and technologies. The one thing that remained the same was exceptions: all the projects had custom exceptions, created by extending the Exception class!

I am sure that most of us know the difference between checked and unchecked exceptions, but very few think carefully before using them. I wanted all the details listed on a single page so that I could convince my team to switch to unchecked exceptions.

In his famous book, Clean Code: A Handbook of Agile Software Craftsmanship[3], Robert C. Martin writes the following lines in support of unchecked exceptions.

The debate is over. For years Java programmers have debated over the benefits and liabilities of checked exceptions. When checked exceptions were introduced in the first version of Java, they seemed like a great idea. The signature of every method would list all of the exceptions that it could pass to its caller. Moreover, these exceptions were part of the type of the method. Your code literally wouldn’t compile if the signature didn’t match what your code could do.

At the time, we thought that checked exceptions were a great idea; and yes, they can yield some benefit. However, it is clear now that they aren’t necessary for the production of robust software. C# doesn’t have checked exceptions, and despite valiant attempts, C++ doesn’t either. Neither do Python or Ruby. Yet it is possible to write robust software in all of these languages. Because that is the case, we have to decide—really—whether checked exceptions are worth their price.

Checked exceptions can sometimes be useful if you are writing a critical library: You must catch them. But in general application development the dependency costs outweigh the benefits.

The last line, where he speaks about general application development, is the most significant. Let's take an example.

If you have to read an XML file using a DOM parser, you need to deal with checked exceptions[5] like ParserConfigurationException, SAXException and IOException. The API developers thought that if the XML was invalid, they should notify the caller, so that the consumer of the API (i.e., the application developer) can decide how to handle the situation.

Now, if you have some alternative that lets the normal logic proceed, you can do that; otherwise you should catch these checked exceptions and throw an unchecked exception. This way the method signatures stay clean, and we are stating that if the XML is invalid we cannot do much, so we stop the processing and let the error handler written at the top layer take the appropriate decision on what to do.

So, all we need to do is create our custom exception class by extending RuntimeException.
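
A minimal sketch of that approach (the class and method names here are illustrative, not from any particular project):

    import java.io.IOException;
    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.parsers.ParserConfigurationException;
    import org.w3c.dom.Document;
    import org.xml.sax.SAXException;

    // Our custom unchecked exception: extends RuntimeException, not Exception.
    class XmlProcessingException extends RuntimeException {
        XmlProcessingException(String message, Throwable cause) {
            super(message, cause);
        }
    }

    class XmlLoader {
        // Clean signature: no "throws" clause, so callers are not forced to catch.
        static Document load(String path) {
            try {
                return DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(path);
            } catch (ParserConfigurationException | SAXException | IOException e) {
                // Nothing useful can be done here; wrap and rethrow unchecked,
                // and let the error handler at the top layer decide what to do.
                throw new XmlProcessingException("Could not parse " + path, e);
            }
        }
    }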

In the Java Tutorial hosted by Oracle, there is an interesting page about this debate[4]. The page ends with the line: "If a client can reasonably be expected to recover from an exception, make it a checked exception. If a client cannot do anything to recover from the exception, make it an unchecked exception."

I have also found a few articles supporting this:

The Tragedy Of Checked Exceptions by Howard Lewis Ship
Exceptions are Bad by Jed Wesley-Smith

Also, here are a few articles on general exception-handling best practices:

Guidelines Exception Handling by Vineet Reynolds
Exceptional practices by Brian Goetz

Wednesday, March 7, 2012

Spring Java developers get Hadoop integration

2012-02-29, InfoWorld, by Paul Krill
Click here to see the article

VMware's Spring Hadoop offers link between Spring development framework and Hadoop distributed processing platform

VMware on Wednesday is introducing its Spring Hadoop software, which is intended to make it easier for Java developers utilizing the Spring Framework to leverage Apache Hadoop data processing capabilities.

Developers can perform MapReduce queries in Hadoop from Spring, then have triggered event results based on Hadoop, said Adam Fitzgerald, VMware director of developer relations. Also, developers can build complex workloads that interact with Hadoop either as individual MapReduce requests or as data-streaming results. 

Hadoop is Apache's open source platform for scalable, distributed computing, while Hadoop MapReduce is a programming model and framework for processing large sets of data. Spring Hadoop will be available on VMware's springsource.org website and is being released under an Apache open source license.

"Spring Hadoop was created to make it more straightforward for enterprise Java developers to use Apache Hadoop," Fitzgerald said. With the integration, VMWare has taken Spring's dependency injection mechanism for linking related objects and applied it to Hadoop. This saves developers time and increases productivity, testability, and portability, Fitzgerald said.

Spring Hadoop enables execution of MapReduce, Streaming, Hive, Pig, and Cascading jobs via the Spring container. Hadoop Distributed File System data access is enabled through JVM scripting languages, such as Groovy and JRuby. Also, declarative and programmatic support is offered for Hadoop tools, including FsShell and DistCp.



Monday, March 5, 2012

Is Java Dead? Heck No!

2012-02-13, eWeek.com, by Darryl K. Taft
Click here to see the article

A recent JBoss post dredges up the old “Java is dead” argument by calling out the need for polyglot programming or using other languages in addition to Java. But Java is not going anywhere.

For the record: Java is not dead, nor is it dying. It is, however, mature, and perhaps a little grumpy and set in its ways.

Yet it seems one of the best ways to draw attention to a post or commentary on Java and programming is to use the words “Java” and some variation of “dead” in the headline.

For instance, recently Mark Little, senior director of engineering at Red Hat, wrote a blog post entitled: “JBoss polyglot - death of Java?”

And although the headline suggests that Red Hat’s JBoss unit is pushing a polyglot programming strategy of using several languages for different projects, the gist of the post is that the company is not trying to move away from Java. In fact, Little makes it plain that “we're as committed to Java today as we've ever been.”

Little noted that JBoss is doing projects with languages such as Ruby, Scala, C/C++, Erlang and others. In the post, he said:

… you can't fail to have noticed that we're doing quite a bit of work with languages other than Java. Those include Ruby, via TorqueBox, Clojure with Immutant, C/C++ in Blacktie, Scala in Infinispan, Ceylon and my own personal favorite Erlang. (OK, that's still more a pet project for me than anything else.) But does this mean that we're turning our backs on Java? No, of course it doesn't! If anything it shows our continued commitment to Java and the JVM because all of these approaches to polyglot leverage our Java projects and platforms.

Little added that some of the efforts Red Hat has been involved in that stress its commitment to Java include: Putting JBossAS 7 onto OpenShift; various discussions and presentations on how core enterprise capabilities transcend languages and Java is a great solution; JBossEverywhere is all about making JBoss’ core services and projects available on a wider range of devices and platforms, some of which are not Java-based but many of which are; the company’s increased presence and adoption at JavaOne; and its efforts to define a common fabric/platform across deployments, which will be based on Java and more in line with ubiquitous computing.

“Then there's the number of times that our competitors keep telling people that Java and EE6 are dead and we keep having to set the record straight,” Little said.

So Little makes no bones about acknowledging continued support for Java. He’s just saying Red Hat, like so many other companies, is using other languages. Oracle, the owner and steward of Java, is even encouraging Java supporters to use other languages on the Java Virtual Machine. The Da Vinci Machine Project is an effort to extend the Java Virtual Machine (JVM) with first-class architectural support for languages other than Java, especially dynamic languages.

There Is a Need for Multiple Languages

There are several languages supported by the JVM, including Clojure, Groovy, Scala, JRuby, Jython, Rhino and AspectJ, to name some.

"There is clearly a need for multiple languages,” Mike Milinkovich, executive director of the Eclipse Foundation, told eWEEK. “Mark's post was absolutely right that the JVM is providing the platform for a renaissance of programming language development and adoption. It will, however, be interesting to watch how the JVM-polyglot scenario unfolds, as there is also a cost to projects if they try to build and maintain applications using multiple languages, even if they're based in the same runtime architecture. I suspect it will take a while before best practices emerge."

Essentially shrugging on the discussion, Grady Booch, chief scientist for software at IBM Research and a software and computer technology historian, quipped: “A person who knows several languages is multilingual. A person who knows two languages is bilingual. A person who knows one language is an American. I know of zero projects of any interesting economic value that are not multilingual. But that doesn't portend the death of Java.”

And for his part, James Gosling, the creator of Java, candidly told eWEEK, “Java, the programming language, was always something of a scam to convince C/C++ programmers this brave new world was something that they could understand. All the magic is in the JVM, and I'm thrilled by all the other languages using it, although I've only dabbled in them since none has really converted me. Scala came very close for a while, and I used it in a biggish project, but I ended up reverting. Scala has all sorts of built-in ideas about how to do various things like pattern matching; if it's not quite what you want, you end up fighting. I conceptually like Clojure, particularly its use of immutability, but I exhausted my lifetime tolerance for parentheses when writing my Ph.D. thesis.”

This whole thing about Java and death puts one in mind of Anne Thomas Manes’ 2009 post that SOA is dead, which shed light on services and was not intended to nail the coffin on service orientation—which is still alive and well. And like the constant attempts to bury the mainframe, the rumors of Java’s demise have been greatly exaggerated.

Some Say Rumors of Java's Demise Have Been Exaggerated

Like the mainframe, Java isn’t going anywhere. It is the No. 1 language for enterprise development. IT organizations ask for it for major enterprise projects. There are more Java jobs around than any other. There continues to be a huge demand for Java developers, and as such there is a large base of Java developers and new folks who are learning the language. It’s a stable language that enables developers to create well-structured code that is easily maintained.

There also is a host of good tools for Java. Java has a huge ecosystem and so many of the surrounding projects and products that support mobile platforms and big-time enterprise computing are Java-based: Android, Hadoop, Jenkins, Cassandra and HBase, to name a few.

Also, Java’s position in January 1996 was No. 5 on the TIOBE Index of the most popular programming languages in use by developers. In January 2006, it was No. 1 and has hovered around the top ever since. The most recent TIOBE Index shows Java at No. 1, though its growth was flat.

At 17 years old, Java is certainly mature and beginning to show signs of age in that its architecture, along with the JVM, can be restrictive for some new programming paradigms. Oracle and the Java Community Process (JCP) try to address these issues with updates and changes to the Java language and platform. So despite losing a bit of its luster, the Java standard remains strong.

For instance, at the Free and Open Source Software Developers’ European Meeting (FOSDEM) in February 2011, Stephen O’Grady, an analyst and co-founder at RedMonk, said, “Java is no longer as popular; what Java is, is the most popular.” O’Grady’s FOSDEM 2011 slides can be found here.

However, in a post from November 2010, Mike Gualtieri, then a Forrester analyst, called Java a dead end. The post, entitled “Java Is A Dead-End For Enterprise App Development,” reads:

“Java is not going away for business applications, just as COBOL is not going away. Java is still a great choice for app dev teams that have developed the architecture and expertise to develop and maintain business applications. It is also an excellent choice (along with C#) for software vendors to develop tools, utilities and platforms such as business process management (BPM), complex event processing (CEP), infrastructure as a service (IaaS), and elastic caching platforms (ECP). Software such as operating systems, databases, and console games are still mostly developed in C++.”

Gualtieri, who is now a vice president of marketing at Progress Software, also said in that 2010 post:

Java development is too complex for business application development. Enterprise application development teams should plan their escape from Java because:
  • Business requirements have changed. The pace of change has increased.
  • Development authoring is limited to programming languages. Even though the Java platform supports additional programming languages such as Groovy and JRuby, the underlying platform limits innovation to the traditional services provided by Java. You can invent as many new programming languages as you want, but they must all be implementable in the underlying platform.
  • Java bungled the presentation layer. Swing is a nightmare, and JavaFX is a failure. JSF was designed for pre-Ajax user interfaces even though some implementations such as ICEfaces incorporate Ajax. There is a steady stream of new UI approaches reflecting Java's lack of leadership in the presentation layer.
  • Java frameworks prove complexity. Hibernate, Spring, Struts and other frameworks reveal Java’s deficiencies rather than its strengths. A future platform shouldn't need a cacophony of frameworks just to do the basics.
  • Java is based on C++. Is this really the best way to develop enterprise business applications?
  • Java’s new boss is the same as the old boss. Oracle’s reign is unlikely to transform Java. Oracle’s recent Java announcements were a disappointment. They are focused on more features, more performance and more partnerships with other vendors. So far, it appears that Oracle is continuing with Sun’s same failed Java policies.
  • Java has never been the only game in town. C# is not the alternative. It is little more than Java Microsoft style. But, there are new developer tools such as Microsoft Lightswitch and WaveMaker, and traditional but updated 4GL tools such as Compuware Uniface and Progress OpenEdge. And don’t forget about business rules platforms, BPM and event processing platforms that enable faster change, offered by enterprise software vendors such as IBM, Progress, TIBCO and Software AG.
Yet, Gualtieri noted that, “Clear standard alternatives to Java and C# for custom-developed applications do not exist.” And he exhorted app dev teams to create a three-year application development strategy and road map to include architecture, process, talent, tools and technology. The road map should clearly look at language and framework options.

Later, in January 2011, Gualtieri’s Forrester colleague John Rymer did a post on the future of Java, where he quite correctly said:

Fewer young developers will learn Java first. One of Java's greatest strengths has been the number of young developers who learn it as a first language. As Java becomes less and less of a client-side language, we expect to see educational institutions switch to other languages for primary education, ones with stronger client-side representation such as JavaScript and HTML 5. Over time, developers will begin to view Java as a server-side language for enterprises—like COBOL.

Meanwhile, a RedMonk ranking of programming languages from last week shows Java as the top programming language according to its method of calculation. In a Feb. 8 post, RedMonk’s O’Grady said:

As recently as a year ago, Java was widely regarded as a language with a limited future. Between the increased competition from dynamic languages and JVM-based Java alternatives, while the JVM had a clearly projectable future, even conservative, enterprise buyer oriented analysts—the constituency most predisposed to defend Java—were writing its obituary. As we argued at FOSDEM last February, however, these conclusions were premature according to our data. One year in, and the data continues to validate that assertion.
Apart from being the second-highest growth language on GitHub next to CoffeeScript, Java—already the language with the second-most associated tags on Stack Overflow—outpaced the median tag volume growth rate of 23 percent. This growth is supported elsewhere; on LinkedIn, the Java user group grew members faster than every other tracked programming language excepting C# and Java. This chart, for example, depicts the percentage of LinkedIn user group growth for Java- and JVM-based alternatives since November of 2011.


Still, there are obviously times when a development team needs to introduce a new language into their environment. Jay Fields, a software developer at DRW Trading, offers advice. Fields said introducing a new language is likely a multi-year affair for any moderately sized organization. He said he had to take on several roles, including becoming the in-house expert, to alleviate his teammates’ concerns.
Said Fields:

“I eased my teammates’ adoption fears by making the following commitments.
  • If you want to work on the code I'll work with you (if you want me to work with you).
  • If you don't want to work on the code I'll fix anything that's broken.
  • If the initial pain of working with a new language becomes unbearable to you, I'll rewrite everything in Java on my own time.”