Agile Chronicles #2: Code Refactoring

If we could see changes ahead of time, there’d be no need for the Agile process in the first place

This entry is about the joy of coding quickly, finding the balance between getting something done now and architecting for the future, and dealing with the massive amount of refactoring that iterative Scrum development entails.

Coding Quickly

I’m coding like I’m in Flash again. Instead of spending 3 weeks setting up Cairngorm or PureMVC with all your use cases, agreeing on the framework implementation details with coworkers, and getting enough of a foundation together that you can actually compile the application and start seeing screens, you make a mad dash to get the app working in just a day or less.

Rather than discussing with your team what the best ValueObject structure is and how your service layer should work, you instead get a login service working in under 40 minutes. If something changes massively, such as the data structure of the user object returned, you just modify or delete & rewrite the entire ValueObject. You didn’t spend a lot of time on it anyway, so it’s not like your “architecture masterpiece” is getting deleted; it’s just some scaffolding code to get you up and running.
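To make that concrete, here’s a rough sketch of the kind of throwaway ValueObject scaffolding I mean (the UserVO name, fields, and package are made up for illustration, not taken from the actual project):

package com.example.vo
{
    // Hypothetical scaffolding VO, banged out in minutes. If the server's user
    // structure changes, it's cheap to modify, or delete and rewrite entirely.
    public class UserVO
    {
        public var id:int;
        public var username:String;
        public var email:String;

        // Assumed shape of the server's response object; adjust as the back-end settles.
        public static function fromObject(data:Object):UserVO
        {
            var vo:UserVO = new UserVO();
            vo.id       = int(data.id);
            vo.username = String(data.username);
            vo.email    = String(data.email);
            return vo;
        }
    }
}

Nothing else is leaning on it yet, so if the back-end changes the user structure next week, you throw it away and write a new one without ceremony.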

Coding For the Future

…yet, it’s not scaffolding; it’s real code that needs to work, and keep working for the entire project. Deciding how much to write well-encapsulated code vs. just getting it done is extremely challenging, and fun. When do you git-r-done and when do you over-architect? How much and where? Hard questions to answer, fun times. Part foresight, part gambling, all calculated risk taking.

You know your service layer, the code that talks to the back-end, probably WON’T change. It’s extremely unlikely that in the middle of your project you’ll switch from .NET and XML to PHP and AMF. Therefore, you can spend more time architecting that portion, confident that the extra time it takes is well spent.

Anyone from a design agency should already find that very familiar. You have a series of impossible deadlines, and arrogant programmers (like me) exclaiming you must utilize OOP, design patterns, and frameworks. You’re challenged with meeting your deadline(s), trying to do right where you can, and learning throughout the process.

This is slightly different in Agile for product development (or even service development) in that, once launched, your application doesn’t have a limited shelf life. It’s an actual product. Traditionally, software is used for 3 to 5 times its originally intended lifespan, although I’d argue that with web software that is lessening. Even before launch, you’ll be extending certain areas, and expecting it to perform solidly. Deciding what to hack together for deadline’s sake and what to invest well-thought-out architecture time in is really hard. REALLY hard. And fun!

UATs As Checkpoints

During sprint UATs (every other Friday for my team), or even just posting the latest working build for the team to see, you’ll inevitably question certain functionality and performance. “Why is that screen so slow to load?”, “Getting to this screen is more tedious than it should be…”, or “My RAM and CPU usage are through the roof!”. The designer may see their designed creation in action, and totally change their mind on how it should look or work. The stakeholders, after using it, may realize that it totally doesn’t solve their original goal(s) like they thought it would. You may even notice a bunch of positive enhancements to make on already working sections.

This may sound frustrating, but it’s good for a bunch of reasons. First off, this is the main reason Waterfall fails as a process. None of these things can happen until the project is COMPLETE, in the Validation phase where you validate the software is on spec. A lot of you may already have had those things happen during a project; now imagine none happening until the entire product is complete. It’s a lot harder to change that much code that late in the game. You now have the opportunity to fix bad decisions, improve design implementations, and add enhancements… early! This is when they can have the most positive impact, reduce risk, and get battle tested more.

Secondly, when you go to fix something, you can code with more confidence since the functionality has at least been used. Programmers second-guess themselves all the time. They have to; early decisions made incorrectly can have disastrous consequences later (quoted from one of the Pragmatic Programmer authors in an interview). It’s really frustrating to be insecure about how a user story actually works. After getting it “working” in a reasonable timeframe, and then using and discussing it, you can have confidence that what you code is more “correct”. Well… almost.

Third, your design gets more real. After banging on the implemented version of the design comps, your designer/UX person can make better decisions now that their design is actually working, and the programmers can collaboratively discuss how to change or improve it. This assumes your designer/UX person hasn’t moved on to another project by this point; keeping them on retainer for at least 4 hours a week is helpful for the project.

Fourth, you get confirmation that certain problems are in fact real problems. You may think something is slow, but if no one notices but you, does it really matter? Naturally, your ego as a programmer is inclined to fix it anyway, but remember, your goal is to get things done, not fix something that isn’t broken. The same goes for problems you know of that other people also see; it’s an iron-clad check mark that something is in fact a problem and needs to be addressed. If you have performance problems with full-screen video on your Mac in Safari and Firefox, and so does your project manager on Windows in IE, Firefox, and Safari, then you can confidently infer that the majority of other people will too.

Granted, testing with more than 2 people is preferred, but the point here is that you get a helpful checkpoint with a second set of eyes. Coding this quickly without too much care for architecture, while juggling a lot of moving pieces, is a lot to handle. Having a helpful team member confirm an issue early is better than finding it months later in QA, even if you knew about it and forgot. Bottom line: using a UAT as an early checkpoint for completed user stories ensures they truly are complete and good, and points out problems or potential enhancements early.

Refactoring


The above leads to refactoring: rewriting or modifying existing code. A lot of the time, refactoring is a pipe dream. Usually you’re so focused on getting things done that having time to make something work better or faster, even the mere possibility of it, is the carrot that keeps you going.

Not in Agile. Based on the past 5 weeks and talking to Darrell (my project manager at Enablus), you refactor on average 30% of your code per sprint. You’re coding so fast and so furiously that not everything is encapsulated as much as it could be (except for my service layer; it’s tight, baby!). Not only that, but as you see the software in action, you can start making valid changes. Maybe the functionality didn’t work as well as you originally thought it would, or perhaps you suddenly realize, now that you see it, that it needs something added.

While this is easy from a user story perspective, just modifying an existing user story or adding a new one, it may not be so straightforward in code. A lot of the time, there was no way you could foresee the change you are now tasked with making.

If we COULD see those changes ahead of time, there’d be no need for the Agile process in the first place.

This means that some of your code needs to be majorly reworked, or even just thrown away and redone from scratch.

While you’re technically working on a user story, you’re potentially breaking another. It’s not necessarily spaghetti code, but it’s certainly not orthogonal by the Pragmatic Programmer’s definition… unless you’ve architected that section out already, you’re a bad ass, or lucky. I’d argue the 30% is a loose average. In the first sprint, I didn’t refactor anything, nor the 2nd week into Sprint #2. In the 2nd and 3rd sprints, I was refactoring up to 40%. In Sprint #4, it’ll definitely be at least 40% again. The 40% arose from taking 3 tries to get a piece of functionality the designer wanted right. The 40% next sprint accounts for my bitmap caching engine suddenly needing to save not just 1, but 2 types of ValueObjects, and all the existing Views that now need to support both.

Not to mention the fact we were working with the server-side team for the first time and still figuring things out. The percentages are not indicative of the entire code-base, but rather of my time spent over the entire sprint (2 weeks). All this while working on new user stories…

For example, while you originally estimated a user story as only a “2 - mostly easy”, it ended up taking you a total of 5 days to complete because you were refactoring and fixing other existing user stories it related to. This can lead to the perception that your original point estimations are inaccurate when, in reality, they’re accurate; it’s just that no adjustment is made for refactoring. This isn’t always taken into consideration when forming a point average for what your team can complete each sprint. Some sprints you hit your “20 average” and another you only hit 15, but you could have possibly refactored 7 points’ worth of existing user stories, thus skewing the results.

Refactoring really confirms how much you wish you could predict the future. As I’ve stated before, sometimes it’s easier to just start from scratch on a certain component now that you know better how it’s supposed to work. The original piece of code may have been really small and not well thought out in the first place for the sake of time. That’s totally fine; the mere fact you’re deleting it and starting from scratch attests to it being a good decision at the time. Other times, however, you’ll notice you have to make major changes to a bunch of different classes, and because not everything is encapsulated, you may suddenly feel like it’s spaghetti code: changing one thing breaks another, totally unrelated piece.

I will say with ActionScript 3, strong-typing and runtime exceptions have really helped me refactor A LOT faster than in the past. I can “break with confidence”, even if I know my code is crap (it isn’t, I’m just going for dramatic effect here… *ahem*). This has really helped remove the “fear” factor you can get with touching code. It’s one thing to have your code build trust with you. You really thought about its architecture, beat on it some, and it held up. Cool, your code has built some trust with you. In coding quickly in Scrum, however, how much trust do you really have when only parts are uber-solid? Knowing that your code is going into a real world product people are paying for doesn’t lessen the pressure and stress.

Again, AS3 has really helped me here. If there is a problem, I’m more likely to find it now, and find it quickly. Additionally, KNOWING that fact allows me to, again, code with more confidence, try more ideas, and end up with better code. Now, you might think you should start coding for every eventuality, at least by assuming errors and checking for null and isNaN like crazy, but quite the opposite. A lot of the runtime errors point out problems pretty quickly, and the catch here is they point them out in both quickly written code AND well-architected code. The point is that even well-architected code will have problems you don’t foresee. What I end up doing is using my best guess at the time, using foresight based on our past UATs and other project detail discussions, and moving on with life. Stressing too much about one section is a waste of time; if it works, rad, move forward. You may rewrite it again later anyway…
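As a hedged sketch of what I mean by “break with confidence” (the PersonVO class, its fullName property, and the package names are hypothetical, just to illustrate a rename during a refactor):

package com.example
{
    import com.example.vo.PersonVO; // hypothetical VO whose "name" field was renamed to "fullName"

    public class RefactorSafetyDemo
    {
        public function run():void
        {
            var person:PersonVO = new PersonVO();

            // The old call site; uncommenting it yields a compile-time error because
            // the property was renamed, so the compiler flags every stale call site:
            // person.name = "Jesse";
            person.fullName = "Jesse";

            // Runtime exceptions catch what the compiler can't see in dynamic data.
            // Casting an untyped service result that isn't really a PersonVO throws
            // TypeError #1034 immediately, pointing straight at the broken spot:
            var raw:Object = { fullName: "Jesse" }; // plain Object off the wire, not a PersonVO
            var typed:PersonVO = PersonVO(raw);
        }
    }
}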

What Doesn’t Change and What Does

Experience has really taught me what to code quickly, what to architect well, and all the in betweens. I haven’t got it all figured out yet, but I DO know of some sections that usually never change, and ones that change all the time.

The one that never changes is the service layer. This is your Business Delegates in Cairngorm, or your Remote Proxies in PureMVC (or, if you’re like me, the Business Delegates that your PureMVC Proxies call). If they DO change, it’s because the server-side developer changed the name of the service, or its location. Whoop-pu-dee-doo… 1 line of code in either the class or your ServiceLocator. If your delegates/proxies use Factories to actually parse the server’s returned data (XML, JSON, AMF, etc.), then you’re even more insulated. Again, middle-tier technology doesn’t really change in the middle of a project.
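Here’s a minimal sketch of that insulation, assuming Cairngorm 2’s ServiceLocator and an HTTPService registered under a made-up id of “loginService” (class and service names are invented for illustration):

package com.example.business
{
    import com.adobe.cairngorm.business.ServiceLocator;
    import mx.rpc.AsyncToken;
    import mx.rpc.IResponder;
    import mx.rpc.http.HTTPService;

    // Hypothetical Business Delegate. The only thing it knows about the back-end
    // is the service id; the Command/Proxy handing in the responder knows even less.
    public class LoginDelegate
    {
        private var responder:IResponder;
        private var service:HTTPService;

        public function LoginDelegate(responder:IResponder)
        {
            this.responder = responder;
            this.service   = ServiceLocator.getInstance().getHTTPService("loginService");
        }

        public function login(username:String, password:String):void
        {
            var token:AsyncToken = service.send({username: username, password: password});
            token.addResponder(responder);
        }
    }
}

If the back-end renames or moves the endpoint, you touch the ServiceLocator entry (or this one getHTTPService call) and nothing else.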

A data model change, on the other hand, usually affects your entire application. For example, if you change the data structure of a Person object (PersonVO), suddenly your Factory changes, your VOs change, any Controller classes modifying PersonVOs change (such as Commands in Cairngorm or Proxies in PureMVC, and potentially Commands as well), and so do any Views that represent or edit them.
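The Factory at least gives you one choke point where the server’s shape becomes a PersonVO; everything downstream still has to be touched by hand. A rough sketch (the fields are invented for illustration):

package com.example.vo
{
    // Hypothetical PersonVO; the fields are assumptions for the example.
    public class PersonVO
    {
        public var id:int;
        public var fullName:String;
        public var email:String;
    }
}

…and the Factory that builds it from the server’s XML:

package com.example.factories
{
    import com.example.vo.PersonVO;

    // The single place the server's XML is parsed into a PersonVO. When the data
    // structure changes, this and the VO change together, but the Commands,
    // Proxies, and Views that use PersonVO still need their own edits.
    public class PersonFactory
    {
        public static function fromXML(node:XML):PersonVO
        {
            var vo:PersonVO = new PersonVO();
            vo.id       = int(node.id);
            vo.fullName = String(node.fullName);
            vo.email    = String(node.email);
            return vo;
        }
    }
}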

If you’re creating complicated Views, whether based on a design comp with little detail or something that isn’t a conventional GUI control, they will definitely change over time once someone uses them and gives feedback. Any View based on a list of dynamic data that needs to draw a bunch of children representing a ValueObject, such as a repeater or a custom Chart, will go through extreme refactoring: both modifications of item renderers and drawing performance improvements if you don’t extend List and do your own drawing routines.

Views such as your main Application file, an optional MainView, a Login, and Menus do not change, assuming you use one CSS file and straightforward skinning. Most Event and Utility classes just get added to; you don’t really change them, rather you add or remove class properties and/or methods, but their names and package structure stay the same.
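For example, a typical custom Event class grows by getting new constants and payload fields bolted on, while its name and package stay put. A sketch (names hypothetical):

package com.example.events
{
    import flash.events.Event;

    public class PersonEvent extends Event
    {
        public static const SELECTED:String = "personSelected";
        public static const SAVED:String    = "personSaved"; // added in a later sprint; nothing existing breaks

        public var person:Object; // hypothetical payload, a PersonVO in practice

        public function PersonEvent(type:String, person:Object = null, bubbles:Boolean = false)
        {
            super(type, bubbles);
            this.person = person;
        }

        override public function clone():Event
        {
            return new PersonEvent(type, person, bubbles);
        }
    }
}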

Cairngorm Commands just grow in scope as the development age of your application increases. Since PureMVC Commands delegate a lot of this Model modification off to Proxies, those Proxies tend to grow in scope as the complexity of your data interactions increases. They only get waxed or massively changed if your data model does, and that doesn’t really happen later in the project.

The above is totally a case by case basis, but has been consistent on a lot of my projects. Your mileage may, and most likely will, vary.

The Cons to Refactoring

There are a few cons to so much refactoring. The first is that some clients don’t understand why you’re coding the same thing twice… or more, especially when Scrum is supposed to be about getting it done quickly vs. over-thinking it. In my experience, if you can speak intelligently at a high level, you can explain each refactoring away. I can’t, so I usually explain it to a project manager who’s capable of translating it into layman’s terms for the client.

The second is that it makes merging on Merge Day a TON harder. You may have already refactored twice the week before, and totally forgotten all the details of why you did. Suddenly, 4 days later (every other Wednesday in our case), you’re having an insanely hard time merging code from your branch(es) into trunk. This may require a long conversation with your team members while you struggle to remember why you made such massive code changes.

Even if you do remember, the other developer may feel a little frustrated if you didn’t invite them into the refactoring discussion for something you felt was trivial at the time. It probably was trivial; it’s just blown out of proportion now since merging is always stressful. Either that, or you just spend a few hours getting trunk working again. If I totally wax something, I’ll usually put a large, drawn-out code comment in to explain why. Additionally, I’ll do the same thing in the SVN check-in comments.

The third is that it’s a project manager’s nightmare. If she doesn’t have enough forewarning of these refactorings and their possible effect on not getting a user story, or set of them, done by the end of the sprint, it can be a bad surprise. Communicating them during the daily standup meeting, with their potential ramifications, is best. It can also make planning future sprints challenging. If your team has been chugging along at an average of 12 points per sprint for 3 consecutive sprints, and suddenly in sprint #4 you spend 60% of your time refactoring, you’re clearly going to finish with a lot fewer points in user stories completed.

This sets the project manager up for failure. They cannot effectively communicate projected progress to the client, nor give visibility into the current progress of the app, since something that worked for a while may suddenly break in the next UAT. You’re supposed to be completing user stories, not creating new ones that break old ones. Again, forewarning is the only remedy I know of so far. I’m not sure yet what doing too much refactoring is a symptom of. Most refactorings so far, on my current project and past ones, have been for random reasons.

Conclusions

I really like how fast I can code some things in Agile. Other things have stayed the same, but the overriding goal of “get it working, but don’t write crap code” is such a high bar… and I love it. It’s the same speed as agency coding, only you know you’ll have to live with the code (aka potentially eating your own mess), so you end up producing better code than you would in an agency setting.

I also like either drawing on experience, or just making challenging inferences, about what to architect well and what to just get working without too much thought. It’s nice to have the variety.

Finally, I’m not sure what to think of the refactoring. I like that it’s “ok” and an expected part of the process, but I feel my project is unique in the amount I’m personally doing. My coworker, for example, isn’t doing nearly as much as I am; he’s chugging along on other user stories and is set to beat me, again, in point values for user stories completed at the end of this sprint. We’re really pushing the limits of Flash Player here, and only one section in this large app is really this challenging; the rest are your run-of-the-mill Flex screens. So, it sounds to me like the “on average 30% of your time is spent refactoring per sprint” figure still applies. There is no way I’ll be refactoring this much on some of the easier sections in future sprints.

Stay tuned for #3 in the Agile Chronicles series where I talk about every developer using their own Branch in Subversion.

More Stories By Jesse Randall Warden

Jesse R. Warden, a member of the Editorial Board of Web Developer's & Designer's Journal, is a Flex, Flash and Flash Lite consultant for Universal Mind. A professional multimedia developer, he maintains a Website at jessewarden.com where he writes about technical topics that relate to Flash and Flex.
