JavaOne 2009: Open Source Project Stonehenge

Interoperability is more than just talking with each other

Microsoft and Sun recently announced their Open Source Project Stonehenge at the JavaOne conference. Stonehenge is a reference implementation that shows how to bridge the two major development platforms, Java and .NET, using Web Services. This initiative definitely puts the spotlight on heterogeneity and the challenges that come with it.
Interoperability at the platform level is just the starting point of bridging the two worlds. It leads to further challenges down the road and raises several questions:

  • Who needs interoperability?
  • How does it affect team productivity?
  • Is it all about application stacks?
  • How effectively can we diagnose problems?
  • How to calculate TCO: does 1 + 1 equal 2 or 3?

Who needs interoperability?
There are different use cases where companies need to think about interoperability:

  • Integrating different systems implemented on different platforms, e.g.: ERP with CRM
  • Integrating 3rd party solutions that only run on a specific platform, e.g.: Enterprise Search Engines
  • Integrating components inherited from acquisitions

The driving factor behind interoperability in all these cases is productivity. Instead of re-implementing an existing system to bring it onto the platform of choice, it is more productive to integrate with the other platform.

Each platform also has its own strengths. Microsoft technologies, for instance, provide great flexibility and good tooling for implementing end-user applications, whereas the Java platform has proven itself very strong in backend enterprise systems. Leveraging the best of both sides requires integrating these two worlds.
Microsoft and Sun took the first step by providing a reference implementation that shows how to technically integrate .NET and Java using Web Services. This is an important first step, but making interoperability work successfully takes more than technical integration.
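
To make the technical bridge a little more concrete, here is a minimal sketch of a Java client calling a Web Service that could just as well be implemented in .NET. The WSDL URL, namespace, service and port names are hypothetical and not taken from the Stonehenge reference implementation; JAX-WS on the Java side is assumed.

// Minimal sketch of a dynamic JAX-WS client calling a (hypothetical) .NET Web Service.
// The WSDL URL, namespace and payload are illustrative only.
import java.io.StringReader;
import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;

public class StockQuoteClient {
    public static void main(String[] args) throws Exception {
        // WSDL exposed by the .NET (WCF/ASMX) side; adjust to your environment.
        URL wsdl = new URL("http://dotnet-host/StockQuoteService.svc?wsdl");
        QName serviceName = new QName("http://example.com/stonehenge", "StockQuoteService");
        QName portName = new QName("http://example.com/stonehenge", "StockQuotePort");

        Service service = Service.create(wsdl, serviceName);
        // Dispatch works on raw XML payloads, so no generated proxy classes are required.
        Dispatch<Source> dispatch =
                service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

        String request = "<GetQuote xmlns=\"http://example.com/stonehenge\">"
                + "<symbol>SUNW</symbol></GetQuote>";
        Source response = dispatch.invoke(new StreamSource(new StringReader(request)));
        System.out.println("Received a response payload: " + response);
    }
}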

How does it affect team productivity?
How often have you seen a .NET developer debugging Java code in Eclipse on Linux? Or a Java developer in front of Visual Studio browsing through C# or VB.NET code?
Cross-platform developers are a rare “species”. A typical cross-platform development team therefore has developers specialized in either Java or .NET. An individual developer most often sees the other platform as a black box, something to be avoided if possible. Web Services allow calling from .NET to Java and the other way around. Debugging is easy on each side individually, but debugging transactions that cross platform boundaries becomes a big obstacle. It requires the developer to be acquainted with both Visual Studio and Eclipse as well as familiar with both the Java and the .NET code, which is rarely the case.
In a typical heterogeneous team, analyzing transaction flows therefore always requires developers from both sides. It is a tedious manual task: setting the correct breakpoints on each side and in each IDE, then stepping through the code. Debugging through code also only works in a single-user environment, as it is hardly possible to identify which thread in the server-side Web Service implementation belongs to a particular calling client.
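
One common workaround, sketched below under the assumption that both sides agree on a custom SOAP header, is to tag every outgoing call with a correlation id so that server-side activity can later be matched to the calling client. The header name, namespace and handler are illustrative only, not part of the Stonehenge project.

// Sketch of a JAX-WS client-side handler that adds a correlation id header to each call.
// The .NET side would have to read the same (made-up) header to complete the correlation.
import java.util.Collections;
import java.util.Set;
import java.util.UUID;
import javax.xml.namespace.QName;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPHeader;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

public class CorrelationIdHandler implements SOAPHandler<SOAPMessageContext> {

    private static final QName CORRELATION_HEADER =
            new QName("http://example.com/diagnostics", "CorrelationId");

    public boolean handleMessage(SOAPMessageContext context) {
        boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (outbound) {
            try {
                SOAPEnvelope envelope = context.getMessage().getSOAPPart().getEnvelope();
                SOAPHeader header = envelope.getHeader();
                if (header == null) {
                    header = envelope.addHeader();
                }
                // Attach a unique id that both platforms can log and correlate on.
                SOAPElement element = header.addHeaderElement(CORRELATION_HEADER);
                element.addTextNode(UUID.randomUUID().toString());
            } catch (Exception e) {
                throw new RuntimeException("Could not add correlation header", e);
            }
        }
        return true; // continue normal processing
    }

    public boolean handleFault(SOAPMessageContext context) { return true; }

    public void close(MessageContext context) { }

    public Set<QName> getHeaders() { return Collections.emptySet(); }
}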

If team collaboration works well, cross-platform problem analysis is doable, but as outlined above it requires at least one person from each side. Far too often these team members don’t communicate well and simply play the “blame game”, attributing every issue to the implementation on the other platform. This way of resolving problems hurts team productivity by introducing extra resolution cycles, and it increases tension between teams and team members.
These cross-team issues are similar to those we have seen between development and testing teams: two teams working in different domains without insight into each other’s problem domain. That problem was solved by giving testers diagnostics tools that collect more meaningful information during their tests, which helps developers quickly identify the root cause of problems. Getting this type of information not only took out the tension but also sped up the overall development cycle.

The logical conclusion is therefore to equip everyone in a heterogeneous team with tools that collect and visualize the right set of data to speed up problem resolution, take out the tension and improve overall team productivity.

Is it all about application stacks?
Integrating the different platforms from an implementation perspective is obviously the mandatory step to allow cross-platform communication. This goal has been achieved with Web Services and the correct implementation of Web Service standards by the different application stack providers.
Development tools like Visual Studio and Eclipse make it easy to generate the application code (proxy classes) needed to call from Java to .NET and vice versa. As long as everything runs fine at runtime, developers on both sides can focus on their own implementation without worrying about what is going on in the other platform’s team.
When problems come up, e.g. a .NET Web Service called from Java returns an obscure error, the Java developer cannot go beyond the error message received in the Eclipse debugger. Tools on each side are very good at debugging, diagnosing and profiling problems on their respective platform. Cross-platform support, however, is missing right now, preventing the Java developer from following the problem to the .NET side.
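
The following sketch illustrates how little the Java side typically gets to see: catching the SOAP fault yields only the fault code and fault string produced on the .NET side. The proxy interface and the service call are hypothetical; JAX-WS is assumed.

// Sketch: all the Java developer sees of a failing .NET service is the SOAP fault.
import javax.xml.ws.soap.SOAPFaultException;

public class FaultLoggingExample {
    // 'StockQuotePort' stands in for a generated JAX-WS proxy; the interface is hypothetical.
    interface StockQuotePort { double getQuote(String symbol); }

    static double callDotNetService(StockQuotePort port, String symbol) {
        try {
            return port.getQuote(symbol);
        } catch (SOAPFaultException fault) {
            // The .NET stack trace, configuration and input handling stay invisible;
            // only the fault code and fault string cross the platform boundary.
            System.err.println("Fault code:   " + fault.getFault().getFaultCode());
            System.err.println("Fault string: " + fault.getFault().getFaultString());
            throw fault;
        }
    }
}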

Why do we need cross platform tools?
Coming back to the example above: a .NET Web Service call that throws an error or executes slowly can have multiple causes. It could be a bug in the .NET Web Service implementation. It could be a configuration issue in one of the SOAP application stacks in use that causes interoperability problems, or it might be problematic input parameters from the Java side that cause unexpected or slow behaviour on the other side. One approach is to analyze log files from both sides. The problem here is that there is no common log format and no transactional context that would allow transactional tracing and correlation of log entries.
To analyze cross-platform problems we therefore need tools that support all involved platforms. This enables developers on both sides to better understand the dynamics of the whole system and speeds up problem resolution.
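
As a small illustration of what such correlation could look like on the Java side: if both platforms agree on a transaction id (for example passed in a SOAP header as sketched earlier), putting it into the logging context makes every Java log line carry that id. SLF4J and its MDC are assumed here; the key name and usage are illustrative, not a prescribed format.

// Sketch: tagging all log output of a request with a shared transaction id via the MDC.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class TransactionLogging {
    private static final Logger LOG = LoggerFactory.getLogger(TransactionLogging.class);

    public static void handleRequest(String transactionId, Runnable businessLogic) {
        MDC.put("txId", transactionId); // picked up via %X{txId} in the log pattern
        try {
            LOG.info("Processing cross-platform request");
            businessLogic.run();
        } finally {
            MDC.remove("txId");
        }
    }
}

If the .NET side writes the same id into its own log entries, the two log streams can at least be correlated per transaction, even without a common log format.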

How effectively can we diagnose problems?
The first step in problem diagnosis is to answer the question of whether there is a problem at all. Problems can manifest themselves in different ways:

  • Bad application performance to the end user
  • Errors in log entries of individual components
  • Resource issues in infrastructure impacting system components

Once we know we have a problem, we need to figure out where it is. Log files that indicate a problem are almost a best-case scenario, as they at least give an indication of where the problem first surfaced. Problems perceived by end users, e.g. bad application performance or error pages, are harder to track down. Where was the time spent? Which component threw the error that made it to the user interface?
Alerts from monitoring individual system components can tell us that CPU usage is high on certain servers or that we consume too much network bandwidth. But which component is responsible for the excessive resource usage? Is it a bug in a component running on these servers, or is it a calling service making too many calls to a certain component?
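
As a very small illustration of the “where was the time spent?” question, the sketch below wraps a cross-platform call with a timer and prints the duration. It only covers a single hop and is no substitute for end-to-end tracing; the helper class and its usage are made up for illustration.

// Sketch: timing a single cross-platform call to see how much time this hop consumes.
import java.util.concurrent.Callable;

public class TimedCall {
    public static <T> T timed(String callName, Callable<T> call) throws Exception {
        long start = System.nanoTime();
        try {
            return call.call();
        } finally {
            long millis = (System.nanoTime() - start) / 1000000L;
            System.out.println(callName + " took " + millis + " ms");
        }
    }
}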

Existing monitoring and diagnostics solutions focus on a particular environment or on single server instances. Application servers usually come with their own diagnostics support and additionally export performance counters that can be picked up by enterprise monitoring solutions covering the complete infrastructure. These tools are great for analyzing general infrastructure problems or standalone problems within a server, but they all fall short when it comes to cross-platform issues. There are tools that analyze log files from the different components and correlate events across log files to identify individual transactions. This is the right way to go, but it relies on having log information that can actually be correlated.

Too often, problem diagnosis in heterogeneous environments falls back to manual work: collecting all available log information and performance counters. Like any manual task, it is not very effective and does not always lead to a successful diagnosis. To diagnose problems we need a common way of capturing information from all platforms that participate in executing a transaction. When a transaction has a problem, all this information must be extractable and easily accessible so developers can analyze it.

How to calculate TCO: does 1 + 1 equal 2 or 3?
The tool landscape for Java and .NET is huge. There are many specialized tools that help improve productivity by supporting the various stakeholders involved in running an application.
When working in cross-platform environments, it is necessary to ensure good tool support for each platform. Most tools on the market specialize in a single platform, so multiple tools are needed to get the best support for each one. More tools mean more cost, especially when we want to ensure productivity.
Total Cost of Ownership for heterogeneous environments, however, is not defined only by the individual costs of the tools it requires. In addition to buying individual tools, there is extra cost involved in integrating them. Getting information from each tool is good, but that information is only really valuable when it can be integrated in a similar way to how our applications are integrated.
The lack of standards makes it very hard to actually integrate these tools and get from the combination the value that each individual tool delivers on its single platform.
Tools that support all platforms and can present the data collected on each platform in an integrated way save the cost that would otherwise be required to integrate individual island solutions.


More Stories By Andreas Grabner

Andreas Grabner has more than a decade of experience as an architect and developer in the Java and .NET space. In his current role, Andi works as a Technology Strategist for Compuware and leads the Compuware APM Center of Excellence team. In his role he influences the Compuware APM product strategy and works closely with customers in implementing performance management solutions across the entire application lifecycle. He is a frequent speaker at technology conferences on performance and architecture-related topics, and regularly authors articles offering business and technology advice for Compuware’s About:Performance blog.
