Agile Chronicles #2: Code Refactoring

If we could see changes ahead of time, there’d be no need for the Agile process in the first place

This entry is about the joy of coding quickly, finding the balance between getting something done fast and architecting for the future, and dealing with the massive amount of refactoring that iterative Scrum development entails.

Coding Quickly

I’m coding like I’m in Flash again. Instead of spending 3 weeks setting up Cairngorm or PureMVC with all your use cases, agreeing on the framework implementation details with coworkers, and getting enough of a foundation together that you can actually compile the application and start seeing screens, you instead make a mad dash to get the app working in just a day or less.

Rather than discussing with your team what the best ValueObject structure is and how your service layer should work, you instead get a login service working in under 40 minutes. If something changes massively, such as the data structure of the user object returned, you just modify or delete & rewrite the entire ValueObject. You didn’t spend a lot of time on it anyway, so it’s not like your “architecture masterpiece” is getting deleted; it’s just some scaffolding code to get you up and running.
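
To make that concrete, here’s a rough sketch of the kind of scaffolding code I mean. Every name in it (UserVO, LoginService, the endpoint URL) is hypothetical; the point is that it’s cheap enough to delete and rewrite wholesale:

```actionscript
// UserVO.as -- a quick ValueObject; public vars, no ceremony.
// If the server's user structure changes, delete it and rewrite it.
package com.example.vo
{
	public class UserVO
	{
		public var id:int;
		public var username:String;
		public var email:String;
	}
}

// LoginService.as (separate file) -- just enough to get a login screen up.
package com.example.services
{
	import mx.rpc.events.FaultEvent;
	import mx.rpc.events.ResultEvent;
	import mx.rpc.http.HTTPService;

	import com.example.vo.UserVO;

	public class LoginService
	{
		private var service:HTTPService;

		public function LoginService()
		{
			service = new HTTPService();
			service.url = "http://example.com/login"; // hypothetical endpoint
			service.method = "POST";
			service.resultFormat = "e4x";
			service.addEventListener(ResultEvent.RESULT, onResult);
			service.addEventListener(FaultEvent.FAULT, onFault);
		}

		public function login(username:String, password:String):void
		{
			service.send({username: username, password: password});
		}

		private function onResult(event:ResultEvent):void
		{
			var xml:XML = XML(event.result);
			var user:UserVO = new UserVO();
			user.id       = int(xml.@id);
			user.username = String(xml.@username);
			user.email    = String(xml.@email);
			// good enough for now; if the structure changes, rewrite it
		}

		private function onFault(event:FaultEvent):void
		{
			trace("login failed: " + event.fault.faultString);
		}
	}
}
```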

Coding For the Future

…yet, it’s not scaffolding; it’s real code that needs to work, and work for the entire project. Deciding how much code to write well and encapsulate vs. just getting it done is extremely challenging, and fun. When do you git-r-done and when do you over-architect? How much and where? Hard questions to answer, fun times. Part foresight, part gambling, all calculated risk taking.

You know your service layer, the code that talks to the back-end, probably WON’T change. It’s extremely unlikely that in the middle of your project you’ll switch from .NET and XML to PHP and AMF. Therefore, you can spend more time architecting that portion, confident the extra time is well invested.

Anyone from a design agency should already find that very familiar. You have a series of impossible deadlines, and arrogant programmers (like me) exclaiming you must utilize OOP, design patterns, and frameworks. You’re challenged with meeting your deadline(s), trying to do right where you can, and learning throughout the process.

This is slightly different in Agile for product development (or even service development) in that once launched, your application doesn’t have a limited shelf life. It’s an actual product. Traditionally, software is used 3 to 5 times longer than its original intended lifespan, although I’d argue that’s lessening with web software. Even before launch, you’ll be extending certain areas and expecting them to perform solidly. Deciding what to hack together for deadline’s sake, and what to invest well-thought-out architecture time in, is really hard. REALLY hard. And fun!

UATs as Checkpoints

During sprint UATs (user acceptance testing, every other Friday for my team), or even just posting the latest working build for the team to see, you’ll inevitably question certain functionality and performance. “Why is that screen so slow to load?”, “Getting to this screen is more tedious than it should be…”, or “My RAM and CPU usage are through the roof!”. The designer may see their designed creation in action and totally change their mind on how it should look or work. The stakeholders, after using it, may realize that it totally doesn’t solve their original goal(s) like they thought it would. You may even notice a bunch of positive enhancements to make on already working sections.

This may sound frustrating, but it’s good for a bunch of reasons. First off, this is the main reason Waterfall fails as a process. None of these things can happen until the project is COMPLETE, in the Validation phase where you validate the software is on spec. A lot of you may already have had those things happen during a project; now imagine none of them happening until the entire product is complete. It’s a lot harder to change that much code that late in the game. You now have the opportunity to fix bad decisions, improve design implementations, and add enhancements… early! This is when they can have the most positive impact, reduce risk, and get battle-tested more.

Secondly, when you go to fix something, you can code with more confidence since the functionality has at least been used. Programmers second-guess themselves all the time. They have to; early decisions made incorrectly can have disastrous consequences later (quoted from one of the Pragmatic Programmer authors in an interview). It’s really frustrating to be insecure about how a user story actually works. After getting it “working” in a reasonable timeframe, and using and discussing it, you can have more confidence that what you code is “correct”. Well… almost.

Third, your design gets more real. After banging on the implemented version of the design comps, your designer/UX person can make better decisions about whether their design actually works, and the programmers can collaboratively discuss how to change or improve it. This assumes your designer/UX person hasn’t moved on to another project by this point; keeping them on retainer for at least 4 hours a week is helpful for the project.

Fourth, you get confirmation that certain problems are in fact real problems. You may think something is slow, but if no one notices but you, does it really matter? Naturally, your ego as a programmer is inclined to fix it anyway, but remember, your goal is to get things done, not fix something that isn’t broken. The same goes for problems you know of and other people see; it’s an iron-clad check mark that something is in fact a problem and needs to be addressed. If you have performance problems with full-screen video on your Mac in Safari and Firefox, and so does your project manager on Windows in IE, Firefox, and Safari, then you can confidently infer that the majority of other people will too.

Granted, testing with more than 2 people is preferred, but the point here is that you get a helpful checkpoint with a second set of eyes. Coding this quickly without too much care for architecture, while juggling a lot of moving pieces, is a lot to handle. Having a helpful team member confirm an issue early is better than finding it months later in QA, even if you knew about it and forgot. Bottom line: using a UAT as an early checkpoint for completed user stories ensures they truly are complete and good, and points out problems or potential enhancements early.

Refactoring

The above leads to refactoring: rewriting or modifying existing code. A lot of the time, refactoring is a pipe dream. Usually you’re so focused on getting things done that having time to make something work better or faster, even just the possibility of it, is the carrot that keeps you going.

Not in Agile. Based on the past 5 weeks and talking to Darrell (my project manager at Enablus), you refactor on average 30% of your code per Sprint. You’re coding so fast and so furiously that not everything is encapsulated as much as it could be (except for my service layer, it’s tight baby!). Not only that, but as you see the software in action, you can start making valid changes. Maybe the functionality didn’t work as well as you originally thought it would, or perhaps you suddenly realize, now that you see it, that it needs something added.

While this is easy from a user story perspective, just modifying an existing user story or adding a new one, it may not be so straightforward in code. A lot of the time, there was no way you could foresee the change you are now tasked with making.

If we COULD see those changes ahead of time, there’d be no need for the Agile process in the first place.

This means that some of your code needs to be majorly reworked, or even just thrown away and redone from scratch.

While you’re technically working on a user story, you’re potentially breaking another. It’s not necessarily spaghetti code, but it’s certainly not orthogonal by the Pragmatic Programmer’s definition… unless you’ve architected that section out already, you’re a bad ass, or you’re lucky. I’d argue the 30% is a loose average. In the first sprint I didn’t refactor anything, nor in the 2nd week into Sprint #2. In the 2nd and 3rd sprints, I was refactoring up to 40%. In Sprint #4, it’ll definitely be at least 40% again. The first 40% arose from taking 3 tries to get a piece of functionality the designer wanted correct. The 40% next Sprint accounts for my bitmap caching engine suddenly needing to save not just one, but two types of ValueObjects, and all the existing Views that now need to support both.

Not to mention the fact that we were working with the server-side team for the first time and still figuring things out. The percentages are not indicative of the entire code base, but rather of my time spent over the entire sprint (2 weeks). All this while working on new user stories…

For example, while you originally estimated a user story would only be a “2 - mostly easy”, it ended up taking you a total of 5 days to complete because you were refactoring and fixing other existing user stories it related to. This can lead to the perception that your original point estimations are inaccurate when in reality they’re accurate; there are just no adjustments made for refactoring. This isn’t always taken into consideration when forming a point average for what your team can complete each sprint. Some sprints you hit your “20 average”, and another you only hit 15, but you could have possibly refactored 7 points’ worth of existing user stories, thus skewing the results.

Refactoring really confirms how much you wish you could predict the future. As I’ve stated before, sometimes it’s easier to just start from scratch on a certain component now that you know better how it’s supposed to work. The original piece of code may have been really small and not well thought out in the first place, for the sake of time. That’s totally fine; the mere fact that you’re deleting it and starting from scratch attests to it being a good decision at the time. Other times, however, you’ll notice you have to make major changes to a bunch of different classes, and because not everything is encapsulated, it may suddenly feel like spaghetti code: changing one thing breaks another, totally unrelated piece.

I will say that with ActionScript 3, strong typing and runtime exceptions have really helped me refactor A LOT faster than in the past. I can “break with confidence”, even if I know my code is crap (it isn’t, I’m just going for dramatic effect here… *ahem*). This has really helped remove the “fear” factor you can get with touching code. It’s one thing when your code has built trust with you: you really thought about its architecture, beat on it some, and it held up. Cool. In coding quickly in Scrum, however, how much trust do you really have when only parts are uber-solid? Knowing that your code is going into a real-world product people are paying for doesn’t lessen the pressure and stress.

Again, AS3 has really helped me here. If there is a problem, I’m more likely to find it now, and find it quickly. Additionally, KNOWING that fact allows me to, again, code with more confidence, try more ideas, and end up with better code. Now, you might think you should start coding for every eventuality, defensively checking for null and isNaN like crazy, but quite the opposite. A lot of the runtime errors point out problems pretty quickly, and the catch here is they point them out in both quickly written code AND well-architected code. The point is that even well-architected code will have problems you don’t foresee. What I end up doing is using my best guess at the time, using foresight based on our past UATs and other project detail discussions, and moving on with life. Stressing too much about one section is a waste of time; if it works, rad, move forward. You may rewrite it again later anyway…
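
As a contrived sketch of what those runtime errors buy you (the PersonVO class here is hypothetical), sealed classes and explicit casts fail loudly and close to the actual mistake:

```actionscript
// PersonVO is a sealed (non-dynamic) class:
public class PersonVO
{
	public var name:String;
}

// A typo through a typed reference never even compiles:
var typed:PersonVO = new PersonVO();
// typed.naem = "Jesse"; // COMPILE ERROR: access of undefined property

// Even through a loose Object reference, a sealed class throws a
// ReferenceError at runtime instead of silently swallowing the typo
// the way AS1/AS2 (and dynamic Objects) did:
var loose:Object = new PersonVO();
loose.naem = "Jesse"; // RUNTIME: ReferenceError #1056

// And a failed cast blows up where the bad data enters,
// not three screens later:
var wrong:Object = "not a person";
var person:PersonVO = PersonVO(wrong); // RUNTIME: TypeError #1034
```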

What Doesn’t Change and What Does

Experience has really taught me what to code quickly, what to architect well, and all the in betweens. I haven’t got it all figured out yet, but I DO know of some sections that usually never change, and ones that change all the time.

The part that almost never changes is the service layer. These are your Business Delegates in Cairngorm, or your Remote Proxies in PureMVC (or, if you’re like me, the Business Delegates that your PureMVC Proxies call). If they DO change, it’s because the server-side developer changed the name of the service, or its location. Whoop-pu-dee-doo… 1 line of code in either the class or your ServiceLocator. If your delegates/proxies use Factories to actually parse the server’s returned data (XML, JSON, AMF, etc.), then you’re even more insulated. Again, middle-tier technology doesn’t really change in the middle of a project.
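
Here’s a hedged sketch of that insulation (all names hypothetical): the delegate is the only class that knows where the service lives, and the factory is the only class that knows the wire format:

```actionscript
// PersonDelegate.as -- the one place that knows the service endpoint.
// If the server-side developer renames or moves it, one line changes.
package com.example.business
{
	import mx.rpc.AsyncToken;
	import mx.rpc.IResponder;
	import mx.rpc.http.HTTPService;

	public class PersonDelegate
	{
		private var responder:IResponder;
		private var service:HTTPService;

		public function PersonDelegate(responder:IResponder)
		{
			this.responder = responder;
			service = new HTTPService();
			service.url = "http://example.com/services/person"; // the 1 line
			service.resultFormat = "e4x";
		}

		public function getPerson(id:int):void
		{
			var token:AsyncToken = service.send({id: id});
			token.addResponder(responder);
		}
	}
}

// PersonFactory.as (separate file) -- the one place that knows the
// data format. Switching the back-end from XML to AMF means rewriting
// this class and nothing else.
package com.example.business
{
	import com.example.vo.PersonVO;

	public class PersonFactory
	{
		public static function fromXML(xml:XML):PersonVO
		{
			var vo:PersonVO = new PersonVO();
			vo.id   = int(xml.@id);
			vo.name = String(xml.@name);
			return vo;
		}
	}
}
```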

A data model change, on the other hand, usually affects your entire application. For example, if you change the data structure of a Person object (PersonVO), suddenly your Factory changes, your VOs change, any Controller classes modifying PersonVOs change (such as Commands in Cairngorm, or Proxies in PureMVC and potentially Commands as well), and so do any Views that represent or edit them.
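
To illustrate the ripple with the hypothetical PersonVO from above, adding a single field touches every layer that knows about a person:

```actionscript
// 1. The VO itself grows a field:
public class PersonVO
{
	public var id:int;
	public var name:String;
	public var birthDate:Date; // NEW
}

// 2. The Factory that parses the server data changes:
vo.birthDate = new Date(String(xml.@birthDate));

// 3. Any Command (Cairngorm) or Proxy (PureMVC) that builds or edits
//    PersonVOs changes, and...
// 4. ...every View that displays or edits a person needs a new
//    control bound to person.birthDate.
```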

If you’re creating complicated Views, whether based on a design comp with little detail or not a conventional GUI control, they will definitely change over time once someone uses them and gives feedback. Any View based on a list of dynamic data that needs to draw a bunch of children representing a ValueObject, such as a Repeater or a custom Chart, will go through extreme refactoring: both modifications of item renderers and drawing performance improvements if you don’t extend List and do your own drawing routines.
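
The sketch below shows the shape of such a View: a hypothetical item renderer for one PersonVO. In my experience this is exactly the kind of class that gets rewritten sprint after sprint as the design and the drawing performance demands shift:

```actionscript
// PersonRenderer.as -- one row representing a PersonVO in a List.
package com.example.view
{
	import mx.containers.HBox;
	import mx.controls.Label;

	import com.example.vo.PersonVO;

	public class PersonRenderer extends HBox
	{
		private var nameLabel:Label;

		override protected function createChildren():void
		{
			super.createChildren();
			nameLabel = new Label();
			addChild(nameLabel);
		}

		// List recycles renderers, so everything visual must be
		// driven off of set data, never cached ad hoc elsewhere.
		override public function set data(value:Object):void
		{
			super.data = value;
			var person:PersonVO = value as PersonVO;
			nameLabel.text = person ? person.name : "";
		}
	}
}
```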

Views such as your main Application file, an optional MainView, a Login, and Menus do not change, assuming you use 1 CSS file and straightforward skinning. Most Event and Utility classes just get added to; you don’t really change them. Rather, you add or remove class properties and/or methods, but their names and package structure stay the same.
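
Those stable Event classes usually look something like this hypothetical sketch; constants get appended over time, but existing names and the package never move, so no listener breaks:

```actionscript
// UserEvent.as -- a typical custom event.
package com.example.events
{
	import flash.events.Event;

	import com.example.vo.UserVO;

	public class UserEvent extends Event
	{
		public static const LOGIN:String   = "userLogin";
		public static const LOGOUT:String  = "userLogout";
		public static const TIMEOUT:String = "userTimeout"; // appended later

		public var user:UserVO; // payload -- hypothetical VO

		public function UserEvent(type:String, user:UserVO = null)
		{
			super(type, true); // bubbling, typical for Cairngorm-style apps
			this.user = user;
		}

		// clone() must be overridden so the payload survives re-dispatch
		override public function clone():Event
		{
			return new UserEvent(type, user);
		}
	}
}
```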

Cairngorm Commands just grow in scope as the development age of your application increases. Since PureMVC Commands delegate a lot of this Model modification off to Proxies, those Proxies tend to grow in scope as the complexity of your data interactions increases. They only get waxed or massively changed if your data model does, and that doesn’t really happen later in the project.

The above is totally a case by case basis, but has been consistent on a lot of my projects. Your mileage may, and most likely will, vary.

The Cons of Refactoring

There are a few cons to so much refactoring. The first is that some clients don’t understand why you’re coding the same thing twice… or more, especially when Scrum is supposed to be about getting it done quickly vs. over-thinking it. In my experience, if you can speak intelligently at a high level, you can explain each refactoring away. I can’t, so I usually explain it to a project manager who’s capable of translating it into layman’s terms for the client.

The second is that it makes merging on Merge Day a TON harder. You may have already refactored twice the week before, and totally forgotten all the details of why you did. Suddenly, 4 days later (every other Wednesday in our case), you’re having an insanely hard time merging code from your branch(es) into trunk. This may require a long conversation with your team members while you struggle to remember why you made such massive code changes.

Even if you do remember, the other developer may feel a little frustrated if you didn’t invite them into the refactoring discussion for something you felt was trivial at the time. It probably was trivial; it’s just blown out of proportion now since merging is always stressful. Either that, or you just spend a few hours getting trunk working again. If I totally wax something, I’ll usually put a large, drawn-out code comment in to explain why. Additionally, I’ll do the same thing in SVN check-in comments.

The third is that it’s a project manager’s nightmare. If she doesn’t have enough forewarning of these changes and their possible effect on not getting a user story, or set of user stories, done by the end of the sprint, it can be a bad surprise. Communicating them during the daily standup meeting, along with their potential ramifications, is best. It can also make planning future sprints challenging. If your team has been chugging along at an average of 12 points per sprint for 3 consecutive sprints, and suddenly in sprint #4 you spend 60% of your time refactoring, you’re clearly going to finish with a lot fewer points in user stories completed.

This sets the project manager up for failure. They cannot effectively communicate projected progress to the client, nor give visibility into the current progress of the app, since something that worked for a while may suddenly break in the next UAT. You’re supposed to be completing user stories, not creating new ones that break old ones. Again, forewarning is the only remedy I know of so far. I’m not sure yet what doing too much refactoring is a symptom of; most of it on my current project, and past ones, has been for random reasons.

Conclusions

I really like how fast I can code some things in Agile. Other things have stayed the same, but the overriding goal of “get it working, but don’t write crap code” is such a high bar… and I love it. It’s the same speed as agency coding, only you know you’ll have to live with the code (aka potentially eating your own mess), so you end up producing better code than you would in an agency setting.

I also like drawing on experience, or just making challenging inferences, about what to architect well and what to just get working without too much thought. It’s nice to have the variety.

Finally, I’m not sure what to think of the refactoring. I like that it’s “ok” and an expected part of the process, but I feel that my project is unique in the amount I’m personally doing. My coworker, for example, isn’t doing nearly as much as I am; he’s chugging along on other user stories and is set to beat me, again, in point values for user stories completed at the end of this sprint. We’re really pushing the limits of Flash Player here, and only one section in this large app is really this challenging; the rest are your run-of-the-mill Flex screens. So it sounds to me like the “on average, 30% of your time is spent refactoring per sprint” still applies. There is no way I’ll be refactoring this much on some of the easier sections in future sprints.

Stay tuned for #3 in the Agile Chronicles series where I talk about every developer using their own Branch in Subversion.

More Stories By Jesse Randall Warden

Jesse R. Warden, a member of the Editorial Board of Web Developer's & Designer's Journal, is a Flex, Flash and Flash Lite consultant for Universal Mind. A professional multimedia developer, he maintains a Website at jessewarden.com where he writes about technical topics that relate to Flash and Flex.
