i-Technology Viewpoint: Laziness Sometimes Pays

The Gains Made by Better Algorithms Almost Always Outstrip the Gains From Better Hardware

Let me begin with a philosophical rant. There is a motto from scientific computing that carries over to many areas of computer science:

/The gains made by better algorithms almost always outstrip the gains from better hardware./

I've frequently seen algorithm improvements pay off by factors of tens to tens of thousands in CPU time. One change I made in a numerical algorithm cut CPU requirements by a factor of 50,000: from weeks on a supercomputer to minutes on a workstation.

Any business-savvy engineer knows that algorithm improvements come at a price: the engineer's time. Striking that balance makes software systems move forward rather than staggering to a halt in bloat and dysfunction. It also helps to use people who actually know what they are doing: knowing how to compile code doesn't make you a software engineer any more than knowing how to spell makes you a writer. End of rant.

On to (rant-related) business. Think of how many times a typical Web site hits a data source to retrieve the same data and produce the same content over and over again. Most successful services deliver highly redundant information to their users. For example, the JDJ website will deliver this (same) content to perhaps a hundred thousand users. If the servers are overtaxed, customers will experience significant delays or malfunctions.

There are several useful solutions to this. Well-configured caching proxy servers come to mind, although server-side scripting makes them difficult to use. Buying more hardware will eventually fix the problem, and that may even be the correct business solution.

But what about asking programmers to be a little more lazy?

For this article I've included the source for the LazyFileOutputStream. It acts just like a regular FileOutputStream except that, if created on a file that already exists, it /reads/ the data from the file instead of writing it. The stream compares what is already in the file with what you are currently writing to it. The moment it sees a difference between the data you are writing this time and what is already there, it automatically switches to a write mode that overwrites the remainder of the file with the changes.
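To make the mechanism concrete, here is a minimal sketch of how such a stream might be implemented. This is not the bpp source the article links to: the real class buffers its comparisons and (as noted in the comments below) names the early-exit method abandon() rather than abort().

    import java.io.File;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.io.RandomAccessFile;

    // Sketch only: compares incoming bytes against the existing file and
    // switches to overwriting at the first mismatch. A production version
    // would also override write(byte[], int, int) and buffer comparisons.
    public class LazyFileOutputStream extends OutputStream {
        private final RandomAccessFile file;
        private boolean comparing;   // still matching the old content?
        private boolean different;   // did the new content diverge?

        public LazyFileOutputStream(File f) throws IOException {
            comparing = f.exists();
            different = !comparing;  // a brand-new file counts as changed
            file = new RandomAccessFile(f, "rw");
        }

        public void write(int b) throws IOException {
            if (comparing) {
                long pos = file.getFilePointer();
                if (file.read() == (b & 0xFF)) {
                    return;          // same byte as last time: leave it alone
                }
                file.seek(pos);      // diverged: back up and start overwriting
                comparing = false;
                different = true;
            }
            file.write(b);
        }

        // True if anything on disk actually changed.
        public boolean isDifferent() {
            return different;
        }

        // "I'm done now; leave the rest of the file alone."
        public void abort() throws IOException {
            file.close();
        }

        public void close() throws IOException {
            long pos = file.getFilePointer();
            if (pos < file.length()) {
                file.setLength(pos); // new content is shorter: trim old tail
                different = true;
            }
            file.close();
        }
    }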

The upshot is that if your program generates the same output twice, the output file is unmodified the second time, leaving the original modification date intact. By simply changing FileOutputStream to LazyFileOutputStream, any downstream processing can use timestamp information on the files to check whether it needs to do anything at all. If the timestamp hasn't changed, then neither have the contents.
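For instance, a toy generator might look like this (the file name and content are made up for illustration):

    import java.io.File;
    import java.io.IOException;

    public class LazyDemo {
        public static void main(String[] args) throws IOException {
            File target = new File("page.html");
            long before = target.lastModified();

            LazyFileOutputStream out = new LazyFileOutputStream(target);
            out.write("<html><body>hello</body></html>".getBytes("UTF-8"));
            out.close();

            // If the same bytes were produced as last run, the timestamp is
            // untouched, so downstream steps can tell there is nothing to do.
            if (target.lastModified() == before) {
                System.out.println("unchanged; skip downstream processing");
            }
        }
    }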

But wait, there's more! In addition to the standard close(), the LazyFileOutputStream also supports abort(). This method effectively states, "I'm done now; leave the rest of the file alone." The remainder of the file will stay the same, even without reproducing it. This means that if you determine early in the processing of a file that it's going to turn out the same, you can simply abort() and leave it alone. It's similar to the idea of not changing the modification dates on files that are rewritten with the same data, but it also saves CPU time for the current process step as well as downstream processing.

Certain template engines produce part of the output before you can conveniently intervene to decide whether you really need to regenerate it. By opening the output as a lazy file, you can just abort() early and keep the old version, with the old modification time, around for downstream processing.
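A sketch of that pattern, where writeHeader(), canSkipRest(), and writeBody() are hypothetical stand-ins for whatever steps your template engine gives you:

    void render(File target) throws IOException {
        LazyFileOutputStream out = new LazyFileOutputStream(target);
        writeHeader(out);        // the engine has already produced this much
        if (canSkipRest()) {     // hypothetical early "nothing changed" test
            out.abort();         // keep the old tail and the old timestamp
        } else {
            writeBody(out);      // regenerate for real
            out.close();         // close() trims any leftover old content
        }
    }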

Okay, rant concluded and point made: CPUs around the planet are spinning through the same data tens of thousands of times, producing the same content tens of thousands of times. Instead of buying great big servers to manage this, a smart caching policy based on lazy file writers and some modification-time testing could save some sites that same wild-sounding factor of 50,000, without having to buy 50,000 new servers.

Anecdote #1. There is a certain technical advantage to this style of writing data as well: most storage devices are easier to read from than write to, and file access tends to adhere to the 80/20 rule: 80% of file accesses will be reads, 20% will be writes. The LazyFileOutputStream takes advantage of that for the many files that are simply rewritten with the same content.

Anecdote #2. There must be a few curled toes out there saying, "Why not a LazyFileWriter?" There are good technical reasons for the OutputStream: the data must be compared in its raw /byte/ format for the idea to work correctly, and you can always wrap the stream in an OutputStreamWriter, followed by a BufferedWriter, which is what I recommend.
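So a character-oriented caller might wrap the stream like this (the UTF-8 encoding is my choice for the example; use whatever your output is encoded in):

    // Character output on top of the byte-comparing stream.
    Writer out = new BufferedWriter(new OutputStreamWriter(
            new LazyFileOutputStream(new File("page.html")), "UTF-8"));
    out.write("character data, compared as raw bytes underneath");
    out.close();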

Now I'm even done with the anecdotes. Have a nice day.

More Stories By Warren MacEvoy

Warren D. MacEvoy is Assistant Professor of Computer Science in the Department of Computer Science, Mathematics & Statistics at Mesa State College, Grand Junction, Colorado.



Most Recent Comments
Bruce VanOrder 10/19/04 09:25:42 AM EDT

I remember the first PC I bought for myself... a CompuAdd 286 with lots of memory (2 MB RAM) and a whopping 40 MB hard drive! On this gargantuan drive I was able to put everything I needed: WordPerfect 5.1, TurboPascal 5.5, Lotus 123, dBaseIII+, etc., and a few games... AND I STILL HAD ROOM!

Now I have a Pentium with 256 MB RAM and a 20 GB hard drive...
MS Office Professional, Borland Delphi, JBuilder, Oracle, SQL Server.

dBaseIII+ could fit on a 1.44 MB 3.5" floppy!

Those were the days, my friend; we thought they'd never end...
:-)

Warren MacEvoy 10/16/04 12:57:28 AM EDT

Response to Mark M.

Back in the bad old days, designers would kick around which sort would be better to use. Now practically all sorting problems are best solved with Collections.sort() or by using a TreeSet or TreeMap. This is a total win: it's faster to write, easier to maintain, and better optimized than any roll-your-own sort. So there's almost no context to worry about: the Collections sort is almost always better.
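For example, a sort that once meant choosing and hand-rolling an algorithm is now one library call:

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    public class SortDemo {
        public static void main(String[] args) {
            List<String> names =
                    new ArrayList<String>(Arrays.asList("carol", "alice", "bob"));
            Collections.sort(names);   // stable, tuned library sort
            System.out.println(names); // [alice, bob, carol]
        }
    }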

I'll claim LazyFileOutputStream sits one step lower than this: it's almost never worse, and sometimes better, than a plain FileOutputStream. If you are writing small chunks to an unbuffered stream (or calling flush() after every character), then the adapter pattern it uses to implement its magic may cost you a little time (though negligibly compared to the other costs of that approach). There is also a buffering overhead because of the (IMHO silly) decision to leave fundamental memory operations like POSIX memcmp out of the Java system libraries. But you're writing to a file, and, well, that's just kind of slow anyway.

But what you gain is information. When you're done, .isDifferent() will tell you whether there was a change, without your having to keep the old copy around for comparison, and the timestamps will tell you even if downstream processing occurs in some logically distant place, like another process.
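In code, the check might look like this (generateReport() and notifyDownstream() are hypothetical placeholders for your own output and downstream logic):

    LazyFileOutputStream out =
            new LazyFileOutputStream(new File("report.txt"));
    generateReport(out);       // hypothetical: whatever writes your data
    out.close();
    if (out.isDifferent()) {
        notifyDownstream();    // hypothetical: react only to real changes
    }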

So there's very little to lose in almost any situation, and a great deal to gain if:

1. your template processing is file-based, and
2. you generally rebuild things only when they are out of
date with respect to their dependencies.

Without timestamp information, implementing part 2 may have seemed like a waste of time (which it would have been, since every template rebuild would look like it was different), but switching to LazyFileOutputStreams can make it effective.
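The test part 2 calls for is tiny; a minimal make-style sketch, assuming plain file targets and dependencies (the class and method names here are mine):

    import java.io.File;

    public class Timestamps {
        // Rebuild target only if it is missing or older than any dependency.
        public static boolean outOfDate(File target, File[] deps) {
            if (!target.exists()) return true;
            for (int i = 0; i < deps.length; i++) {
                if (deps[i].lastModified() > target.lastModified()) return true;
            }
            return false;
        }
    }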

Warren MacEvoy 10/15/04 07:07:48 PM EDT

My apologies, but somehow the wrong link was placed for the source file. The correct address for the LazyFileOutputStream the article refers to is:

http://bpp.sourceforge.net/download/bpp-0.8.5b/src/bpp/LazyFileOutputStr...

JavaDoc'ed at:
http://bpp.sourceforge.net/download/bpp-0.8.5b/doc/javadoc/index.html

You might also note that the class uses abandon() instead of abort(), which is a minor change.

This has nothing to do with

http://www.jdocs.com/ant/1.6.2/api/org/apache/tools/ant/util/LazyFileOut...

Again, my apologies for any confusion this may have created...

Mark M 10/15/04 11:47:56 AM EDT

Response to Warren M.

The key question is not how much more complicated it is to write the class. The key question is what is the context of the problem? Too often generic solutions to problems are presented (even if that is not always the author's intention, these things can easily be misinterpreted as such) and their validity/necessity almost always depends upon the context of the problem. You yourself emphasize the need for context in your response to Jim M. You have created a useful tool for yourself given the context of the problem you were trying to solve. When the next programmer comes along, the context may be completely different. Oftentimes, many are led to believe incorrectly in one-size-fits-all philosophies. For instance, it is widely viewed within the industry by working folk like myself that the notions of Bertrand Meyer and Kent Beck conflict, when in fact they both may be valid solutions under differing contexts. Lack of context is the biggest complaint I have with books on process in this industry. Without it, many arguments are neither valid nor invalid, just ambiguous. There is at least one really bright fella who says a lot about context when he writes. His name is Fred Brooks.

Warren MacEvoy 10/14/04 10:42:09 PM EDT

Response to Mark M.

I agree that it is usually a waste of time to optimize without profiling to know where your problems are. You must also have a business argument that the problem needs to be solved and that optimization is the best way to solve it.

It is wrong to think that optimizations must be complicated. There's plenty of code out there that makes poor or no use of Collections, and that would be faster to write, maintain, and execute if better choices were made. Good programmers should know how to use these features to improve turnaround, defect rates, and efficiency (the rant part of my article).

The purpose of the article is to point out another kind of "low-hanging fruit" related to file processing. After all, how much more complicated is it to write "LazyFileOutputStream" than "FileOutputStream"?

Response to Jim M.

Completely? Substantially, yes. "Completely" claims they have nothing to learn from each other, yet there are many business problems with a short lifetime, and plenty of rustic scientific codes are dutifully solving the problems they were designed to solve twenty and thirty years after they were written.

Again, the optimizations I'm suggesting don't need to be complicated. The LazyFileOutputStream is as simple as the code it replaces. How does that hurt readability or maintainability?

As far as longevity, I like the analogy of building a wall. The last row of the wall (business or scientific) can be very slipshod and the wall will still be a wall. Much software is written with the (sometimes correct) assumption that it will be part of the last row of bricks. But people change their minds, and what was once the last row is not anymore. In the real world, this is why tens of thousands of people die when there is an earthquake in a third-world country.

Should businesses be happy with a software design model analogous to the slums of Mexico City?

Response to Justin S.

Edit one line of an XML configuration file, changing one attribute. Many elements of your design depend on this XML file, but almost none of them depend on this one attribute. Your solution suggests detailed code to check whether the attributes each dependency requires have changed, which would be hugely complicated to write and maintain.

Mine asks you to rebuild the elements that directly depend on the configuration file. If they don't change, then you don't have to propagate updates further. Not a perfect optimization, but a much more practical one.

The LazyFileOutputStream supports your idea if you choose to pursue it. If a template decides it does not need to regenerate a target, it can simply abort() to leave the current contents alone without going to the trouble of regenerating all of it.

Justin Sadowski 10/14/04 08:27:34 PM EDT

While I agree with your thoughts about the value of avoiding writing the same data over and over, I have to disagree with your LazyFileOutputStream solution. If you find that you are writing the same data to the same file repeatedly, I would suggest that you improve this by avoiding the rewriting altogether, instead of just making the rewriting more efficient.

For example, perhaps you are writing the same output repeatedly because you are operating on the same input; i.e. the data in a database hasn't changed, or a source XML file hasn't changed. If you can detect your input hasn't been modified, you can avoid writing the output altogether.

I would like to hear more details about the specific situation(s) in which you have used LazyFileOutputStream; I would be interested to hear an example of a situation where my logic above does not apply.

Jim T. 10/14/04 07:09:02 PM EDT

Scientific computing and business computing are completely different. In the scientific community you usually have a very small number of highly skilled people working on a program. That just isn't so in the business world. In the business world I care much more about readability and maintainability than speed for 99% of our code. In science, nobody will be using my programs five years from now; the data will all have been analyzed and the papers published. In business, the exact same code will be used five years from now (or at least it will be the basis of the code). I believe this is true because it is true of my code from five years ago. The physics code is gone/useless and the business code is being resold every day.

mark mcconkey 10/14/04 05:51:12 PM EDT

Several years back I began reading Kent Beck's stuff (XP) and it struck a chord for me because many of my experiences were similar. I believe Kent's general notion is something to the effect that one should not optimize up front because it's too difficult to predict the future, and the majority of the time you will have made your code unreadable for no reason whatsoever. Of course, any seasoned programmer has experienced enough to have a feel for when big troubles are over the hill and thus that some optimization up front will be needed. I think, though, that what is missing in your article is a discussion of context. If I have 3 weeks to finish something that will take 6, and 50 bigwigs in a Fortune 500 company have goals dependent upon the completion of my software, it doesn't matter how clever I am. It matters how fast I can produce what is needed. On the other hand, the creators of Amazon probably needed to be quite clever in order to deal with the magnitude of hits on their servers. Without context, it's sort of useless to talk about optimization.
