
Agile Chronicles #2: Code Refactoring

If we could see changes ahead of time, there’d be no need for the Agile process in the first place

This entry is about the joy of coding quickly, finding the balance between getting something done now and architecting for the future, and dealing with the massive amount of refactoring that iterative Scrum development entails.

Coding Quickly

I’m coding like I’m in Flash again. Instead of spending 3 weeks setting up Cairngorm or PureMVC with all your use cases, agreeing on the framework implementation details with coworkers, and getting enough of a foundation together that you can actually compile the application and start seeing screens, you make a mad dash to get the app working in a day or less.

Rather than discussing with your team what the best ValueObject structure is and how your service layer should work, you instead get a login service working in under 40 minutes. If something changes massively, such as the data structure of the user object returned, you just modify or delete & rewrite the entire ValueObject. You didn’t spend a lot of time on it anyway, so it’s not like your “architecture masterpiece” is getting deleted; it’s just some scaffolding code to get you up and running.
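To make that concrete, here is roughly what that kind of quick scaffolding might look like. This is a minimal sketch; the class names, fields, and endpoint (UserVO, LoginService, /services/login.ashx) are made up for illustration, not taken from the actual project.

    // UserVO.as -- a quick-and-dirty ValueObject; if the server's user
    // structure changes, just rewrite this class and the parsing below.
    package com.example.vo
    {
        [Bindable]
        public class UserVO
        {
            public var id:String;
            public var userName:String;
            public var email:String;
        }
    }

    // LoginService.as -- one HTTPService call, no framework ceremony.
    package com.example.services
    {
        import com.example.vo.UserVO;

        import mx.rpc.events.FaultEvent;
        import mx.rpc.events.ResultEvent;
        import mx.rpc.http.HTTPService;

        public class LoginService
        {
            private var service:HTTPService = new HTTPService();

            public function LoginService()
            {
                service.url          = "/services/login.ashx"; // assumed endpoint
                service.resultFormat = "e4x";
                service.addEventListener(ResultEvent.RESULT, onResult);
                service.addEventListener(FaultEvent.FAULT, onFault);
            }

            public function login(userName:String, password:String):void
            {
                service.send({username: userName, password: password});
            }

            private function onResult(event:ResultEvent):void
            {
                var xml:XML     = XML(event.result);
                var user:UserVO = new UserVO();
                user.id       = String(xml.@id);
                user.userName = String(xml.userName);
                user.email    = String(xml.email);
                // ...hand the UserVO off to whoever needs it.
            }

            private function onFault(event:FaultEvent):void
            {
                trace("Login failed: " + event.fault.faultString);
            }
        }
    }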

Coding For the Future

…yet, it’s not scaffolding; it’s real code that needs to work, and work for the entire project. Deciding how much to write as well-encapsulated code vs. just getting it done is extremely challenging, and fun. When do you git-r-done and when do you over-architect? How much and where? Hard questions to answer, fun times. Part foresight, part gambling, all calculated risk taking.

You know your service layer, the code that talks to the back-end, probably WON’T change. It’s extremely unlikely that in the middle of your project you’ll switch from .NET and XML to PHP and AMF, so you can spend more time architecting that portion with confidence that the extra time is well spent.

Anyone from a design agency should already find that very familiar. You have a series of impossible deadlines, and arrogant programmers (like me) exclaiming you must utilize OOP, design patterns, and frameworks. You’re challenged with meeting your deadline(s), trying to do right where you can, and learning throughout the process.

This is slightly different in Agile for product development (or even service development) in that once launched, your application doesn’t have a limited shelf life. It’s an actual product. Traditionally, software is used for 3 to 5 times its originally intended lifespan, although I’d argue that with web software that is lessening. Even before launch, you’ll be extending certain areas and expecting them to perform solidly. Deciding what to hack together for deadline’s sake, and what to invest well-thought-out architecture time in, is really hard. REALLY hard. And fun!

UATs As Checkpoints

During sprint UATs (every other Friday for my team), or even just posting the latest working build for the team to see, you’ll inevitably question certain functionality and performance. “Why is that screen so slow to load?”, “Getting to this screen is more tedious than it should be…”, or “My RAM and CPU usage are through the roof!”. The designer may see their creation in action and totally change their mind on how it should look or work. The stakeholders, after using it, may realize that it totally doesn’t solve their original goal(s) like they thought it would. You may even notice a bunch of positive enhancements to make to already-working sections.

This may sound frustrating, but it’s good for a bunch of reasons. First off, this is the main reason Waterfall fails as a process. None of these things can happen until the project is COMPLETE, in the Validation phase where you validate that the software is on spec. A lot of you may already have had those things happen during a project; now imagine none of them happening until the entire product is complete. It’s a lot harder to change that much code that late in the game. You now have the opportunity to fix bad decisions, improve design implementations, and add enhancements… early! This is when they can have the most positive impact, reduce risk, and get battle-tested longer.

Secondly, when you go to fix something, you can code with more confidence since the functionality has at least been used. Programmers second-guess themselves all the time. They have to; early decisions made incorrectly can have disastrous consequences later (quoted from one of the Pragmatic Programmer authors in an interview). It’s really frustrating to be insecure about how a user story actually works. After getting it “working” in a reasonable timeframe, and using & discussing it, you can have confidence that what you code is more “correct”. Well… almost.

Third, your design gets more real. After banging on the implemented version of the design comps, your designer/UX person can make better decisions about whether their design is actually working, and the programmers can collaboratively discuss how to change or improve it. This assumes your designer/UX person hasn’t moved on to another project by this point; keeping them on retainer for at least 4 hours a week is helpful for the project.

Fourth, you get confirmation that certain problems are in fact real problems. You may think something is slow, but if no one notices but you, does it really matter? Naturally, your ego as a programmer is inclined to fix it anyway, but remember, your goal is to get things done, not fix something that isn’t broken. The same goes for problems you know of and other people see; it’s just an iron-clad check mark that something is in fact a problem and needs to be addressed. If you have performance problems with full-screen video on your Mac in Safari and Firefox, and so does your project manager on Windows in IE, Firefox, and Safari, then you can confidently infer that the majority of other people will, too.

Granted, testing with more than 2 people is preferred, but the point here is that you get a helpful checkpoint with a 2nd set of eyes. Coding this quickly, without too much care for architecture and while juggling a lot of moving pieces, is a lot to handle. Having a helpful team member confirm an issue early is better than finding it months later in QA, even if you knew about it and forgot. Bottom line: using a UAT as an early checkpoint for completed user stories ensures they truly are complete and good, and it points out problems or potential enhancements early.


Refactoring

The above leads to refactoring: rewriting or modifying existing code. A lot of the time, refactoring is a pipe dream. Usually you’re so focused on getting things done that having time to make something work better or faster, even the mere possibility of it, is the carrot that keeps you going.

Not in Agile. Based on the past 5 weeks and talking to Darrell (my project manager at Enablus), you refactor on average 30% of your code per Sprint. You’re coding so fast and so furiously that not everything is encapsulated as much as it could be (except for my service layer, it’s tight baby!). Not only that, but as you see the software in action, you can then start making valid changes. Maybe the functionality didn’t work as well as you originally thought it would, or perhaps you suddenly realize, now that you see it, that it needs something added.

While this is easy from a user story perspective, just modifying an existing user story or adding a new one, it may not be so straightforward in code. A lot of the time, there was no way you could foresee the change you are now tasked with making.

If we COULD see those changes ahead of time, there’d be no need for the Agile process in the first place.

This means that some of your code needs to be majorly reworked, or even just thrown away and redone from scratch.

While you’re technically working on one user story, you’re potentially breaking another. It’s not necessarily spaghetti code, but it’s certainly not orthogonal by the Pragmatic Programmers’ definition… unless you’ve architected that section out already, you’re a bad ass, or you’re lucky. I’d argue the 30% is a loose average. In the first sprint, I didn’t refactor anything, nor 2 weeks into Sprint #2. In the 2nd and 3rd sprints, I was refactoring up to 40%. In Sprint #4, it’ll definitely be at least 40% again. That 40% arose from taking 3 tries to get a piece of functionality the designer wanted right. The 40% next Sprint accounts for my bitmap caching engine suddenly needing to save not just 1, but 2 types of ValueObjects, and all the existing Views that now need to support both.

Not to mention the fact that we were working with the server-side team for the first time and still figuring things out. The percentages are not indicative of the entire code-base, but rather of my time spent during the entire sprint (2 weeks). All this while working on new user stories…

For example, you originally estimated a user story as a “2 - mostly easy”, but it ended up taking a total of 5 days to complete because you were refactoring and fixing other existing user stories it related to. This can lead to the perception that your original point estimates are inaccurate when in reality they are accurate; there’s just no adjustment made for refactoring. That isn’t always taken into consideration when forming a point average for what your team can complete each sprint. One sprint you hit your “20 average”, and another you only hit 15, but you could have refactored 7 points’ worth of existing user stories, thus skewing the results.

Refactoring really confirms how much you wish you could predict the future. As I’ve stated before, sometimes it’s easier to just start from scratch on a certain component now that you know better how it’s supposed to work. The original piece of code may have been really small and not well thought out in the first place, for the sake of time. That’s totally fine; the mere fact that you’re deleting it and starting from scratch attests to it being a good decision at the time. Other times, however, you’ll have to make major changes to a bunch of different classes, and because not everything is encapsulated, it can suddenly feel like spaghetti code: changing one thing breaks another, totally unrelated piece.

I will say that with ActionScript 3, strong typing and runtime exceptions have really helped me refactor A LOT faster than in the past. I can “break with confidence”, even if I know my code is crap (it isn’t, I’m just going for dramatic effect here… *ahem*). This has really helped remove the “fear” factor you can get when touching code. It’s one thing to have your code build trust with you: you really thought about its architecture, beat on it some, and it held up. Cool, your code has built some trust with you. When coding quickly in Scrum, however, how much trust do you really have when only parts are uber-solid? Knowing that your code is going into a real-world product people are paying for doesn’t lessen the pressure and stress.

Again, AS3 has really helped me here. If there is a problem, I’m more likely to find it now, and find it quickly. Additionally, KNOWING that fact allows me to, again, code with more confidence, try more ideas, and end up with better code. Now, you might think you should start coding for every eventuality, defensively checking for null and isNaN like crazy, but quite the opposite. A lot of the runtime errors point out problems pretty quickly, and the catch here is that they point them out in both quickly written code AND well-architected code. The point is that even well-architected code will have problems you don’t foresee. What I end up doing is using my best guess at the time, using foresight based on our past UATs and other project discussions, and moving on with life. Stressing too much about one section is a waste of time; if it works, rad, move forward. You may rewrite it again later anyway…
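As a rough illustration of what I mean, here is a minimal sketch; imagine both methods living inside a View with a Label named nameLabel, and note that PersonVO and the method names are hypothetical, not from the actual project.

    // Strict typing plus runtime coercion errors surface bad assumptions
    // immediately, in quick code and well-architected code alike.
    private function showPerson(result:Object):void
    {
        // Throws TypeError #1034 on the spot if result isn't a PersonVO --
        // exactly the early, loud failure that makes refactoring safer.
        var person:PersonVO = PersonVO(result);
        nameLabel.text = person.firstName + " " + person.lastName;
    }

    // The "code for every eventuality" version mostly just hides the bug:
    private function showPersonDefensively(result:Object):void
    {
        var person:PersonVO = result as PersonVO; // silently null if the cast fails
        if (person != null && person.firstName != null)
        {
            nameLabel.text = person.firstName;
        }
    }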

What Doesn’t Change and What Does

Experience has really taught me what to code quickly, what to architect well, and everything in between. I haven’t got it all figured out yet, but I DO know of some sections that usually never change, and ones that change all the time.

The one that never changes is the service layer. These are your Business Delegates in Cairngorm, or your Remote Proxies in PureMVC (or if you’re like me, the Business Delegates that your PureMVC Proxies call). If they DO change, it’s because the server-side developer changed the name of the service, or its location. Whoop-pu-dee-doo… 1 line of code in either the class or your ServiceLocator. If your delegates/proxies use Factories to actually parse the server’s returned data (XML, JSON, AMF, etc.), then you’re even more insulated. Again, middle-tier technology doesn’t really change in the middle of a project.
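Here is a minimal, hand-rolled sketch of what I mean by an insulated delegate; the class name and endpoint are made up, and it intentionally skips the actual Cairngorm ServiceLocator API.

    // PersonDelegate.as -- the endpoint lives in one constant and parsing is
    // handed off elsewhere, so a server-side rename is a one-line change here.
    package com.example.business
    {
        import mx.rpc.AsyncToken;
        import mx.rpc.IResponder;
        import mx.rpc.http.HTTPService;

        public class PersonDelegate
        {
            // If the back end renames or moves the service, only this changes.
            private static const SERVICE_URL:String = "/services/person.ashx";

            private var service:HTTPService = new HTTPService();
            private var responder:IResponder;

            public function PersonDelegate(responder:IResponder)
            {
                this.responder       = responder;
                service.url          = SERVICE_URL;
                service.resultFormat = "e4x"; // XML today; a different delegate could speak AMF tomorrow
            }

            public function loadPerson(id:String):void
            {
                var token:AsyncToken = service.send({id: id});
                token.addResponder(responder);
            }
        }
    }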

A data model change, on the other hand, usually affects your entire application. For example, if you change the data structure of a Person object (PersonVO), suddenly your Factory changes, your VOs change, any Controller classes that modify PersonVOs change (such as Commands in Cairngorm, or Proxies in PureMVC and potentially Commands as well), and so do any Views that represent or edit them.
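For example, here is a hypothetical PersonVO and the Factory that builds it from the server’s XML; rename or add a field in the server’s Person structure, and both of these change first, followed by every Command/Proxy and View that reads those fields.

    // PersonVO.as
    package com.example.vo
    {
        [Bindable]
        public class PersonVO
        {
            public var id:String;
            public var firstName:String;
            public var lastName:String;
        }
    }

    // PersonFactory.as -- the single place that knows the server's XML shape.
    package com.example.factories
    {
        import com.example.vo.PersonVO;

        public class PersonFactory
        {
            public static function fromXML(node:XML):PersonVO
            {
                var vo:PersonVO = new PersonVO();
                vo.id        = String(node.@id);
                vo.firstName = String(node.firstName);
                vo.lastName  = String(node.lastName);
                return vo;
            }
        }
    }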

If you’re creating complicated Views, whether based on a design comp with little detail or on something that isn’t a conventional GUI control, they will definitely change over time once someone uses them and gives feedback. Any View based on a list of dynamic data that needs to draw a bunch of children representing a ValueObject, such as a Repeater or a custom Chart, will go through extreme refactoring: both modifications to item renderers and drawing-performance improvements if you don’t extend List and do your own drawing routines.
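As a small, hypothetical example of the kind of View that churns, here is a dirt-simple item renderer bound to the PersonVO sketched above; every time the VO or the design changes, code like this (usually far more involved than a Label) gets reworked.

    // PersonItemRenderer.as -- assumes the PersonVO sketched above.
    package com.example.views
    {
        import com.example.vo.PersonVO;

        import mx.controls.Label;

        public class PersonItemRenderer extends Label
        {
            override public function set data(value:Object):void
            {
                super.data = value;
                var person:PersonVO = value as PersonVO;
                text = (person != null)
                    ? person.lastName + ", " + person.firstName
                    : "";
            }
        }
    }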

Views such as your main Application file, an optional MainView, a Login, and Menus do not change, assuming you use 1 CSS file and straightforward skinning. Most Event and Utility classes just get added to; you don’t really change them. Rather, you add or remove class properties and/or methods, but their names and package structure stay the same.
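For instance, a typical custom Event class (again, hypothetical names) only ever grows; constants and properties pile up, but the class name and package stay put.

    // PersonEvent.as -- gets added to over time, rarely renamed or moved.
    package com.example.events
    {
        import com.example.vo.PersonVO;

        import flash.events.Event;

        public class PersonEvent extends Event
        {
            public static const PERSON_SELECTED:String = "personSelected";
            public static const PERSON_UPDATED:String  = "personUpdated";
            // New event types get appended here as the app grows.

            public var person:PersonVO;

            public function PersonEvent(type:String, person:PersonVO = null, bubbles:Boolean = true)
            {
                super(type, bubbles);
                this.person = person;
            }

            override public function clone():Event
            {
                return new PersonEvent(type, person, bubbles);
            }
        }
    }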

Cairngorm Commands just grow in scope as your application ages. Since PureMVC Commands delegate a lot of that Model modification to Proxies, those Proxies tend to grow in scope as the complexity of your data interactions increases. They only get waxed or massively changed if your data model does, and that doesn’t really happen late in a project.

The above is totally a case-by-case basis, but it has been consistent across a lot of my projects. Your mileage may, and most likely will, vary.

The Cons of Refactoring

There are a few cons to so much refactoring. The first is that some clients don’t understand why you’re coding the same thing twice… or more, especially when Scrum is supposed to be about getting it done quickly vs. over-thinking it. In my experience, if you can speak intelligently at a high level, you can explain each bit of refactoring away. I can’t, so I usually explain it to a project manager who’s capable of translating it into layman’s terms for the client.

The second thing is that it makes merging on Merge Day a TON harder. You may have already refactored, like, twice the week before and totally forgotten the details of why you did. Suddenly, 4 days later (every other Wednesday in our case), you’re having an insanely hard time merging code from your branch(es) into trunk. This may require a long conversation with your team members while you struggle to remember why you made such massive code changes.

Even if you do remember, the other developer may feel a little frustrated if you didn’t invite them into the refactoring discussion for something that, at the time, you felt was trivial. It probably was trivial; it’s just blown out of proportion now because merging is always stressful. Either that, or you just spend a few hours getting trunk working again. If I totally wax something, I’ll usually leave a large, drawn-out code comment explaining why. Additionally, I’ll do the same thing in my SVN check-in comments.

The third thing is that it’s a project manager’s nightmare. If she doesn’t have enough forewarning of these changes and their possible effect on getting a user story (or a set of them) done by the end of the sprint, it can be a bad surprise. Communicating them, along with their potential ramifications, during the daily standup meeting is best. It can also make planning future sprints challenging. If your team has been chugging along at an average of 12 points per sprint for 3 consecutive sprints, and suddenly in sprint #4 you spend 60% of your time refactoring, you’re clearly going to finish with a lot fewer points in completed user stories.

This sets the project manager up for failure. They cannot effectively communicate projected progress to the client, nor give visibility into the current state of the app, since something that worked for a while may suddenly break in the next UAT. You’re supposed to be completing user stories, not creating new ones that break old ones. Again, forewarning is the only immediate remedy I know of. I’m not sure yet what doing too much refactoring is a symptom of. Most of it, on my current project and past ones, has been for random reasons.


Conclusions

I really like how fast I can code some things in Agile. Other things have stayed the same, but the overriding goal of “get it working, but don’t write crap code” is such a high bar… and I love it. It’s the same speed as agency coding, only you know you’ll have to live with the code (aka potentially eating your own mess), so you end up producing better code than you would in an agency setting.

I also like either drawing on experience, or just making challenging inferences, about what to architect well and what to just get working without too much thought. It’s nice to have the variety.

Finally, I’m not sure what to think of the refactoring. I like that it’s “ok” and an expected part of the process, but I feel that my project is unique in the amount I’m personally doing. My coworker, for example, isn’t doing nearly as much as I am; he’s chugging along on other user stories and is set to beat me, again, in point value of user stories completed at the end of this sprint. We’re really pushing the limits of Flash Player here, and only one section in this large app is really that challenging; the rest are your run-of-the-mill Flex screens. So it sounds to me like the “on average, 30% of your time is spent refactoring per sprint” figure still applies. There is no way I’ll be refactoring this much on some of the easier sections in future sprints.

Stay tuned for #3 in the Agile Chronicles series where I talk about every developer using their own Branch in Subversion.

More Stories By Jesse Randall Warden

Jesse R. Warden, a member of the Editorial Board of Web Developer's & Designer's Journal, is a Flex, Flash and Flash Lite consultant for Universal Mind. A professional multimedia developer, he maintains a Website at jessewarden.com where he writes about technical topics that relate to Flash and Flex.

