Agile Chronicles #2: Code Refactoring

If we could see changes ahead of time, there’d be no need for the Agile process in the first place

This entry is about the joy of coding quickly, finding the balance between getting something done fast and architecting for the future, and dealing with the massive amount of refactoring entailed in iterative Scrum development.

Coding Quickly

I’m coding like I’m in Flash again. Instead of spending 3 weeks setting up Cairngorm or PureMVC with all your use cases, agreeing on the framework implementation details with coworkers, and getting enough of a foundation together that you can actually compile the application and start seeing screens, you make a mad dash to get the app working in a day or less.

Rather than discussing with your team what the best ValueObject structure is and how your service layer should work, you instead get a login service working in under 40 minutes. If something changes massively, such as the data structure of the user object returned, you just modify or delete & rewrite the entire ValueObject. You didn’t spend a lot of time on it anyway, so it’s not like your “architecture masterpiece” is getting deleted; it’s just some scaffolding code to get you up and running.
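
To make that concrete, here’s a minimal sketch of what that under-40-minutes version might look like; the class, package, and endpoint names are hypothetical, not from the actual project. It assumes a UserVO that is nothing but typed public fields (id, username, email), so deleting and rewriting it costs almost nothing.

package com.example.business
{
	import com.example.vo.UserVO; // hypothetical scaffolding VO: public id, username, email fields
	import mx.rpc.events.FaultEvent;
	import mx.rpc.events.ResultEvent;
	import mx.rpc.http.HTTPService;

	// Quick-and-dirty login service; scaffolding to get up and running,
	// not an architecture piece.
	public class QuickLogin
	{
		private var service:HTTPService = new HTTPService();

		public function QuickLogin()
		{
			service.url = "http://example.com/login"; // hypothetical endpoint
			service.resultFormat = "e4x";
			service.addEventListener(ResultEvent.RESULT, onResult);
			service.addEventListener(FaultEvent.FAULT, onFault);
		}

		public function login(username:String, password:String):void
		{
			service.send({username: username, password: password});
		}

		private function onResult(event:ResultEvent):void
		{
			// If the user data structure changes massively, modify or
			// rewrite this parsing and the VO; nothing precious is lost.
			var xml:XML = event.result as XML;
			var user:UserVO = new UserVO();
			user.id = int(xml.@id);
			user.username = String(xml.@username);
			user.email = String(xml.@email);
		}

		private function onFault(event:FaultEvent):void
		{
			trace("Login failed: " + event.fault.faultString);
		}
	}
}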

Coding For the Future

…yet, it’s not scaffolding; it’s real code that needs to work, and work for the entire project. Deciding how much to write well and encapsulated vs. just getting it done is extremely challenging, and fun. When do you git-r-done and when do you over-architect? How much and where? Hard questions to answer, fun times. Part foresight, part gambling, all calculated risk taking.

You know your service layer, the code that talks to the back-end, probably WON’T change. It’s extremely unlikely that in the middle of your project you’ll switch from .NET and XML to PHP and AMF. Therefore, you can spend more time architecting that portion with confidence that the extra time it takes is well spent.

Anyone from a design agency should already find that very familiar. You have a series of impossible deadlines, and arrogant programmers (like me) exclaiming you must utilize OOP, design patterns, and frameworks. You’re challenged with meeting your deadline(s), trying to do right where you can, and learning throughout the process.

This is slightly different in Agile for product development (or even service development) in that once launched, your application doesn’t have a limited shelf life. It’s an actual product. Traditionally, software is used 3 to 5 times longer than its originally intended lifespan, although I’d argue that with web software this is lessening. Even before launch, you’ll be extending certain areas and expecting them to perform solidly. Deciding what to hack together for deadline’s sake, and what to invest well-thought-out architecture time in, is really hard. REALLY hard. And fun!

UATs As Checkpoints

During sprint UATs (every other Friday for my team), or even just posting the latest working build for the team to see, you’ll inevitably question certain functionality and performance. “Why is that screen so slow to load?”, “Getting to this screen is more tedious than it should be…”, or “My RAM and CPU usage are through the roof!”. The designer may see their creation in action and totally change their mind on how it should look or work. The stakeholders, after using it, may realize that it totally doesn’t solve their original goal(s) like they thought it would. You may even notice a bunch of positive enhancements to make on already working sections.

This may sound frustrating, but it’s good for a bunch of reasons. First off, this is the main reason Waterfall fails as a process: none of these things can happen until the project is COMPLETE, in the Validation phase where you validate the software is on spec. A lot of you may have already had these things happen during a project; now imagine none of them happening until the entire product is complete. It’s a lot harder to change that much code that late in the game. You now have the opportunity to fix bad decisions, improve design implementations, and add enhancements… early! This is when they can have the most positive impact, reduce the most risk, and get the most battle testing.

Secondly, when you go to fix something, you can code with more confidence since the functionality has at least been used. Programmers second-guess themselves all the time. They have to; early decisions made incorrectly can have disastrous consequences later (as one of the Pragmatic Programmer authors put it in an interview). It’s really frustrating to be insecure about how a user story actually works. After getting it “working” in a reasonable timeframe, then using and discussing it, you can have confidence that what you code is more “correct”. Well… almost.

Third, your design gets more real. After banging on the implemented version of the design comps, your designer/UX person can make better decisions now that their design is actually working, and the programmers can collaboratively discuss how to change or improve it. This assumes your designer/UX person hasn’t moved on to another project by this point; keeping them on retainer for at least 4 hours a week is helpful for the project.

Fourth, you get confirmation that certain problems are in fact real problems. You may think something is slow, but if no one notices but you, does it really matter? Naturally, your ego as a programmer is inclined to fix it anyway, but remember, your goal is to get things done, not fix something that isn’t broken. The same goes for problems you know of that other people also see; it’s an iron-clad check mark that something is in fact a problem and needs to be addressed. If you have performance problems with full-screen video on your Mac in Safari and Firefox, and so does your project manager on Windows in IE, Firefox, and Safari, then you can confidently infer that the majority of other people will too.

Granted, testing with more than 2 people is preferred, but the point here is that you get a helpful checkpoint with a second set of eyes. Coding this quickly without too much care for architecture, while juggling a lot of moving pieces, is a lot to handle. Having a helpful team member confirm an issue early is better than finding it months later in QA, even if you knew about it and forgot. Bottom line: using a UAT as an early checkpoint for completed user stories ensures they truly are complete and good, and points out problems or potential enhancements early.

Refactoring

The above leads to refactoring: rewriting or modifying existing code. A lot of the time, refactoring is a pipe dream. Usually you’re so focused on getting things done that having time to make something work better or faster, even the mere possibility of it, is the carrot that keeps you going.

Not in Agile. Based on the past 5 weeks and talking to Darrell (my project manager at Enablus), you refactor on average 30% of your code per sprint. You’re coding so fast and so furiously that not everything is encapsulated as much as it could be (except for my service layer; it’s tight, baby!). Not only that, but as you see the software in action, you can start making valid changes. Maybe the functionality didn’t work as well as you originally thought it would, or perhaps you suddenly realize, now that you see it, that it needs something added.

While this is easy from a user story perspective, just modifying an existing user story or adding a new one, it may not be so straightforward in code. A lot of the time, there was no way you could have foreseen the change you are now tasked with making.

If we COULD see those changes ahead of time, there’d be no need for the Agile process in the first place.

This means that some of your code needs to be majorly reworked, or even just thrown away and redone from scratch.

While you’re technically working on one user story, you’re potentially breaking another. It’s not necessarily spaghetti code, but it’s certainly not orthogonal by the Pragmatic Programmers’ definition… unless you’ve architected that section out already, you’re a bad ass, or you’re lucky. I’d argue the 30% is a loose average. In the first sprint, I didn’t refactor anything, nor in the 2nd week of Sprint #2. In the 2nd and 3rd sprints, I was refactoring up to 40%. In Sprint #4, it’ll definitely be at least 40% again. The first 40% arose from taking 3 tries to get a piece of functionality the designer wanted right. The 40% next sprint accounts for my bitmap caching engine suddenly needing to save not just 1, but 2 types of ValueObjects, and all the existing View’s that now need to support both.

Not to mention we were working with the server-side team for the first time and still figuring things out. The percentages are not indicative of the entire code-base, but rather of my time spent over the entire sprint (2 weeks). All this while working on new user stories…

For example, while you originally estimated a user story at only a “2 - mostly easy”, it ended up taking you a total of 5 days to complete because you were refactoring and fixing other existing user stories it related to. This can lead to the perception that your original point estimations are inaccurate when in reality they’re accurate; there are just no adjustments made for refactoring. This isn’t necessarily taken into consideration when forming a point average for what your team can complete each sprint. Some sprints you hit your “20 average”, and another you only hit 15, but you could have possibly refactored 7 points’ worth of existing user stories, thus skewing the results.

Refactoring really confirms how much you wish you could predict the future. As I’ve stated before, sometimes it’s easier to just start from scratch on a certain component now that you know better how it’s supposed to work. The original piece of code may have been really small and not well thought out in the first place for the sake of time. That’s totally fine; the mere fact you’re deleting it and starting from scratch attests to it being a good decision at the time. Other times, however, you’ll notice you have to make major changes to a bunch of different classes, and because not everything is encapsulated, it may suddenly feel like spaghetti code: changing one thing breaks another, totally unrelated piece.

I will say that with ActionScript 3, strong typing and runtime exceptions have really helped me refactor A LOT faster than in the past. I can “break with confidence”, even if I know my code is crap (it isn’t; I’m just going for dramatic effect here… *ahem*). This has really helped remove the “fear” factor you can get with touching code. It’s one thing to have your code build trust with you: you really thought about its architecture, beat on it some, and it held up. Cool, your code has built some trust with you. When coding quickly in Scrum, however, how much trust do you really have when only parts are uber-solid? Knowing that your code is going into a real-world product people are paying for doesn’t lessen the pressure and stress.

Again, AS3 has really helped me here. If there is a problem, I’m more likely to find it now, and find it quickly. Additionally, KNOWING that fact allows me to, again, code with more confidence, try more ideas, and end up with better code. Now, you might think you should start coding for every eventuality, at least defensively checking for null and isNaN like crazy, but quite the opposite. The runtime errors can point out problems pretty quickly, and the catch here is that they point them out in both quickly written code AND well-architected code. The point is that even well-architected code will have problems you don’t foresee. What I end up doing is using my best guess at the time, using foresight based on our past UATs and other project detail discussions, and moving on with life. Stressing too much about one section is a waste of time; if it works, rad, move forward. You may rewrite it again later anyway…
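
As a rough illustration of that “break with confidence” point (hypothetical names, assuming a strictly typed UserVO like the earlier sketch): rename or retype a VO property and the compiler flags every reference; pass the wrong object through a cast and Flash Player throws a loud Type Coercion error instead of failing silently.

package com.example
{
	import com.example.vo.UserVO;

	public class RefactorSafety
	{
		public function toUser(raw:Object):UserVO
		{
			// Runtime: if raw isn't actually a UserVO, this cast throws
			// TypeError #1034 (Type Coercion failed) right here, pointing
			// out the break immediately instead of corrupting state later.
			var user:UserVO = UserVO(raw);

			// Compile time: a typo like user.usrname won't even build,
			// unlike property access on a plain, untyped Object.
			trace(user.username);
			return user;
		}
	}
}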

What Doesn’t Change and What Does

Experience has really taught me what to code quickly, what to architect well, and all the in betweens. I haven’t got it all figured out yet, but I DO know of some sections that usually never change, and ones that change all the time.

The one that never changes is the service layer. This is your Business Delegates in Cairngorm, or your Remote Proxies in PureMVC (or, if you’re like me, the Business Delegates that your PureMVC Proxies call). If they DO change, it’s because the server-side developer changed the name of the service, or its location. Whoop-pu-dee-doo… 1 line of code in either the class or your ServiceLocator. If your delegates/proxies use Factories to actually parse the server’s returned data (XML, JSON, AMF, etc.), then you’re even more insulated. Again, middle-tier technology doesn’t really change in the middle of a project.
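
A sketch of that shape, assuming Cairngorm 2’s ServiceLocator and hypothetical names: the delegate is the only thing that knows which service to call, so a renamed or relocated service is a one-line fix.

package com.example.business
{
	import com.adobe.cairngorm.business.ServiceLocator;
	import mx.rpc.AsyncToken;
	import mx.rpc.IResponder;

	// Business Delegate: looks up the service, makes the call, and hands
	// results back to whoever implements IResponder (usually a Command).
	public class LoginDelegate
	{
		private var responder:IResponder;

		public function LoginDelegate(responder:IResponder)
		{
			this.responder = responder;
		}

		public function login(username:String, password:String):void
		{
			// "loginService" is defined once in the ServiceLocator; if the
			// back-end renames or moves it, only that definition changes.
			var token:AsyncToken = ServiceLocator.getInstance()
				.getHTTPService("loginService")
				.send({username: username, password: password});
			token.addResponder(responder);
		}
	}
}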

A data model change usually affects your entire application. For example, if you change the data structure of a Person object (PersonVO), suddenly your Factory changes, your VO’s change, any Controller classes modifying PersonVO’s change (Commands in Cairngorm; Proxies, and potentially Commands as well, in PureMVC), and so do any View’s that represent or edit them.
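
For illustration, a hypothetical PersonVO Factory shows where the ripple starts: the Factory is the single place that knows the server’s data shape, so when PersonVO changes, the Factory and VO change together, and then everything touching PersonVO follows.

package com.example.factories
{
	import com.example.vo.PersonVO; // hypothetical VO with id, firstName, lastName

	public class PersonFactory
	{
		// The one spot that maps server XML onto the VO; add a field to
		// PersonVO and this method changes first, then the ripple spreads
		// out to Commands/Proxies and View's.
		public static function fromXML(node:XML):PersonVO
		{
			var person:PersonVO = new PersonVO();
			person.id = int(node.@id);
			person.firstName = String(node.firstName);
			person.lastName = String(node.lastName);
			return person;
		}
	}
}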

If you’re creating complicated View’s, whether based on a design comp with little detail or not a conventional GUI control, they will definitely change over time once someone uses them and gives feedback. Any View based on a list of dynamic data that needs to draw a bunch of children representing a ValueObject, such as a Repeater or a custom Chart, will go through extreme refactoring: both modifications to item renderers and drawing-performance improvements if you don’t extend List and do your own drawing routines.
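
The usual starting point for that refactoring, sketched below with hypothetical names, is an item renderer that defers its redraw work through Flex’s invalidation cycle rather than redrawing on every data assignment; the commitProperties() body is the part that gets rewritten sprint after sprint.

package com.example.views
{
	import com.example.vo.PersonVO;
	import mx.containers.Canvas;

	// Item renderer for a List/Repeater row; Container implements
	// IDataRenderer, so overriding the data setter is all it takes.
	public class PersonRenderer extends Canvas
	{
		private var person:PersonVO;

		override public function set data(value:Object):void
		{
			super.data = value;
			person = value as PersonVO;
			invalidateProperties(); // defer the work instead of redrawing now
		}

		override protected function commitProperties():void
		{
			super.commitProperties();
			if (person)
			{
				// update child controls from the VO here; this is the code
				// that changes as UAT feedback rolls in
			}
		}
	}
}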

View’s such as your main Application file, an optional MainView, a Login, and Menus do not change, assuming you use 1 CSS file and straightforward skinning. Most Event and Utility classes just get added to; you don’t really change them so much as add or remove class properties and/or methods, while their names and package structure stay the same.
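
A hypothetical Event subclass shows why: sprint over sprint, you add constants and payload properties, but the class name and package never move, so nothing referencing it breaks.

package com.example.events
{
	import com.example.vo.PersonVO;
	import flash.events.Event;

	public class PersonEvent extends Event
	{
		public static const SELECTED:String = "personSelected";
		// later sprints just add more constants and properties here

		public var person:PersonVO;

		public function PersonEvent(type:String, person:PersonVO, bubbles:Boolean = false)
		{
			super(type, bubbles);
			this.person = person;
		}

		// clone() keeps the payload intact when the event re-dispatches
		override public function clone():Event
		{
			return new PersonEvent(type, person, bubbles);
		}
	}
}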

Cairngorm Commands just grow in scope as the development age of your application increases. Since PureMVC Commands delegate a lot of Model modification off to Proxies, those Proxies tend to grow in scope as the complexity of your data interactions increases. They only get waxed or massively changed if your data model does, and that doesn’t really happen later in the project.
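
For instance, a minimal PureMVC Proxy (hypothetical names, assuming the standard AS3 port) starts as little more than a typed wrapper around its data, then accumulates methods as data interactions grow.

package com.example.model
{
	import com.example.vo.PersonVO;
	import org.puremvc.as3.patterns.proxy.Proxy;

	public class PersonProxy extends Proxy
	{
		public static const NAME:String = "PersonProxy";

		public function PersonProxy()
		{
			super(NAME, []);
		}

		public function get persons():Array
		{
			return data as Array;
		}

		// Commands delegate Model changes here; methods like this pile up
		// as the complexity of your data interactions increases.
		public function addPerson(person:PersonVO):void
		{
			persons.push(person);
			sendNotification("personAdded", person); // hypothetical notification name
		}
	}
}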

The above is totally a case by case basis, but has been consistent on a lot of my projects. Your mileage may, and most likely will, vary.

The Cons to Refactoring

There are a few cons to so much refactoring. The first is that some clients don’t understand why you’re coding the same thing twice… or more, especially when Scrum is supposed to be about getting it done quickly vs. over-thinking it. In my experience, if you can speak intelligently at a high level, you can explain each refactoring away. I can’t, so I usually explain it to a project manager who’s capable of translating it into layman’s terms for the client.

The second is that it makes merging on Merge Day a TON harder. You may have already refactored twice the week before and totally forgotten the details of why. Suddenly, 4 days later (every other Wednesday in our case), you’re having an insanely hard time merging code from your branch(es) into trunk. This may require a long conversation with your team members while you struggle to remember why you made such massive code changes.

Even if you do remember, the other developer may feel a little frustrated if you didn’t invite them into the refactoring discussion for something you felt, at the time, was trivial. It probably was trivial; it’s just blown out of proportion now, since merging is always stressful. Either that, or you just spend a few hours getting trunk working again. If I totally wax something, I’ll usually put in a large, drawn-out code comment to explain why. I’ll do the same thing in my SVN check-in comments.

The third is that it’s a project manager’s nightmare. If she doesn’t have enough forewarning of these changes and their possible effect on getting a user story, or a set of them, done by the end of the sprint, it can be a bad surprise. Communicating them during the daily standup meeting, along with potential ramifications, is best. It can also make planning future sprints challenging. If your team has been chugging along at an average of 12 points per sprint for 3 consecutive sprints, and suddenly in sprint #4 you spend 60% of your time refactoring, you’re clearly going to finish with a lot fewer points in user stories completed.

This sets the project manager up for failure. They cannot effectively communicate projected progress to the client, nor offer visibility into the current progress of the app, since something that worked for a while may suddenly break in the next UAT. You’re supposed to be completing user stories, not creating new ones that break old ones. Again, forewarning is the only remedy I know of so far. I’m not sure yet what doing too much refactoring is a symptom of. Most of it, on my current project and past ones, has been for random reasons.

Conclusions

I really like how fast I can code some things in Agile. Other things have stayed the same, but the overriding goal of “get it working, but don’t write crap code” is such a high bar… and I love it. It’s the same speed as agency coding, only you know you’ll have to live with the code (aka potentially eating your own mess), so you end up producing better code than you would in an agency setting.

I also like drawing on experience, or just making challenging inferences, about what to architect well and what to just get working without too much thought. It’s nice to have the variety.

Finally, I’m not sure what to think of the refactoring. I like that it’s “ok” and an expected part of the process, but I feel my project is unique in the amount I’m personally doing. My coworker, for example, isn’t doing nearly as much as I am; he’s chugging along on other user stories and is set to beat me, again, in point values for user stories completed at the end of this sprint. We’re really pushing the limits of Flash Player here, and only one section in this large app is really this challenging; the rest are your run-of-the-mill Flex screens. So it sounds to me like the “on average, 30% of your time is spent refactoring per sprint” still applies. There is no way I’ll be refactoring this much on some of the easier sections in future sprints.

Stay tuned for #3 in the Agile Chronicles series where I talk about every developer using their own Branch in Subversion.

More Stories By Jesse Randall Warden

Jesse R. Warden, a member of the Editorial Board of Web Developer's & Designer's Journal, is a Flex, Flash and Flash Lite consultant for Universal Mind. A professional multimedia developer, he maintains a Website at jessewarden.com where he writes about technical topics that relate to Flash and Flex.
