
Finding the Right Little Data | @CloudExpo #BigData #ML #InternetOfThings

Even with the great strides technology has taken, data quality remains a tremendous challenge for genealogy researchers

Over the years, one of my favorite pastimes has been working on the family genealogy. I first started work on it in the early 1990s. At that time, records were not digitized, and research involved going to libraries, newspapers, and various local, state, and federal archives. There one would have to sift through reams of old records, documents, and microfiche. If you were lucky, someone had created printed indices of the information contained in those documents. These indices, when they existed, were based on manual transcription of the data and were prone to the quality problems inherent in transcribing information from what were frequently old handwritten documents. The challenge was then trying to find the nuggets of information that would relate and connect to the individuals you were trying to locate in your research.

For anyone operating in the Big Data space of today, this may all sound very familiar. Genealogists, amateur and otherwise, had been dealing with the challenges of Big Data, unstructured data, and data quality long before the terms became technology buzzwords. There are tools and products today to help; who hasn't seen the commercials for ancestry.com? Even with technology, however, there are still challenges. These challenges are not unique to genealogy, and they make a good lens for viewing and discussing the business needs for Big Data in general. Let's take a closer look at some of them.

Data Quality
Even with the great strides technology has taken, data quality remains a tremendous challenge for genealogy researchers. Let's take U.S. Census data as an example. Every 10 years, the U.S. government conducts a census of the population, and the results of these censuses become public 72 years after they are taken (the 1940 Census material just recently became available). U.S. Census data is a gold mine for genealogy research.

Take, as an example, a page from the 1930 Federal census. Census forms were filled out by census takers going door to door and asking the residents questions. There are, however, a number of data quality factors you must take into consideration. The handwriting on a given page may be fairly legible, but that's not always the case. You are also constrained by the census taker's interpretation of each person's answers and pronunciation. For example, this could result in different variations on the spelling of names.

When such a document gets transcribed, you can still end up with multiple sources of data quality problems: the original census taker could have written the information down incorrectly, or the person transcribing it could have made a transcription error.
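
Genealogists have long worked around spelling drift with phonetic coding; in fact, several of the U.S. census indexes are organized by Soundex, which collapses similar-sounding surnames to a single key. Below is a minimal Python sketch of the American Soundex algorithm; the surname spellings in the demo are illustrative:

```python
def soundex(name: str) -> str:
    """Return the 4-character American Soundex code for a surname."""
    mapping = {}
    for letters, digit in (("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")):
        for ch in letters:
            mapping[ch] = digit

    name = "".join(c for c in name.lower() if c.isalpha())
    if not name:
        return ""

    code = name[0].upper()
    prev = mapping.get(name[0], "")
    for ch in name[1:]:
        if ch in "hw":                # h and w do not break a run of duplicates
            continue
        digit = mapping.get(ch, "")   # vowels and y map to ""
        if digit and digit != prev:
            code += digit
        prev = digit                  # a vowel resets the run
    return (code + "000")[:4]         # pad or truncate to 4 characters


# Variant spellings of the same surname collapse to one code.
print(soundex("Featherston"), soundex("Fetherstone"))  # F362 F362
print(soundex("Smith"), soundex("Smyth"))              # S530 S530
```

Searching by phonetic code rather than exact spelling is one simple way to keep a transcription variant from hiding the record you are after.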

This challenge is not unique to genealogy research; data quality has been an issue in IT systems since the first IT system. In the world of Big Data, unstructured data (such as social media) and things like crowd-sourced data can become a daunting challenge. As with any challenge, we must understand the impact of the issues, the risk, and what can be done to mitigate that risk. In my example above, Ancestry.com takes an interesting approach to mitigation. Since they have millions of records based on scanned documents, checking each one is beyond reasonable expectation, so they crowd-source corrections. As a customer, I locate a particular record for someone I am looking for, that little data in all the Big Data. If I notice some type of error, I can flag the record, categorize the error, and provide what I believe is the correct information. Ancestry will then look at my correction and, if appropriate, cleanse the transcribed data.
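
To make the flow concrete, here is a minimal sketch of what such a flag-and-review loop might look like. The record fields, error categories, and review step are hypothetical illustrations, not Ancestry.com's actual data model or API:

```python
from dataclasses import dataclass


@dataclass
class Record:
    record_id: str
    fields: dict              # field name -> transcribed value


@dataclass
class Correction:
    record_id: str
    field_name: str
    category: str             # e.g., "transcription error", "original document error"
    proposed_value: str
    submitted_by: str


def review(record: Record, correction: Correction, approved: bool) -> None:
    """Apply a crowd-sourced correction once a reviewer deems it appropriate."""
    if approved and correction.record_id == record.record_id:
        record.fields[correction.field_name] = correction.proposed_value


# A customer spots a mis-transcribed surname in a census record and flags it.
rec = Record("1930-census-0412", {"surname": "Fetherstone", "age": "34"})
fix = Correction("1930-census-0412", "surname", "transcription error",
                 "Featherston", "member-1138")
review(rec, fix, approved=True)
print(rec.fields["surname"])  # Featherston
```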

Data Pedigree
Even though we are discussing genealogy, data pedigree is not about the family tree. Data pedigree asks 'Where did the data come from?' and 'How reliable is that source?' If, as an organization, you own the data, that is not an issue. In today's Big Data world, many sources are outside of your direct control (unstructured social media data, crowd-sourced data). For genealogy research, data pedigree has always been an issue and a concern. A date of birth is a lot more reliable from a town birth record than from, say, the census example above, where the information is 'the age at last birthday, as provided in the interview' (I have seen variations of multiple years across sequential census forms for the same individual). To return to my Ancestry.com example: in addition to source records, Ancestry members can make their research available for online search and sharing. When using others' data (i.e., crowd-sourced research), one must always feel comfortable with the reliability of the source. Ancestry allows you to identify what your source of information was, and you can cite multiple sources (for example, I may source a date of birth from a birth record, a marriage record, and a death record). That information is more reliable than a date of birth with no source cited. When I find a potential match (again, that little data I am truly looking for), I can determine whether it truly is a match or possibly a false correlation.
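
One way to reason about pedigree programmatically is to attach a reliability weight to each source type and score a fact by the sources cited for it. The weights and the independence assumption below are made-up illustrations, not values from any real product:

```python
# Hypothetical reliability weights for common genealogical source types.
SOURCE_WEIGHTS = {
    "town birth record": 0.95,
    "marriage record":   0.80,
    "death record":      0.70,
    "census interview":  0.50,   # "age at last birthday," as reported
    "member tree":       0.30,   # another researcher's uncited work
}


def pedigree_score(sources: list[str]) -> float:
    """Crude confidence in a fact: 1 minus the probability that every
    cited source is wrong, treating the sources as independent."""
    p_all_wrong = 1.0
    for s in sources:
        p_all_wrong *= 1.0 - SOURCE_WEIGHTS.get(s, 0.10)
    return 1.0 - p_all_wrong


# A date of birth backed by three records beats one from a single uncited tree.
print(round(pedigree_score(["town birth record", "marriage record",
                            "death record"]), 3))   # 0.997
print(round(pedigree_score(["member tree"]), 3))    # 0.3
```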

Similarly, in any Big Data implementation, we must understand the pedigree of our data sources, because it affects any analytics we perform and the resulting correlations. If you don't, you run the risk of false correlations and assumptions. For some entertaining examples of false correlations, check out www.tylervigen.com.

Finding That Gem of Little Data in the Huge Oceans of Big Data
The ultimate value of Big Data is not the huge ocean of data itself; it's being able to find the gems of little data that provide the information you seek. In genealogy, it is wonderful that I have millions of public records, documents, and other genealogy research available to sift through, but that's not the value. The value is when I find the record for that one individual in the family tree I have been trying to find. Doing the analysis and matching the data depends heavily on the challenges we have been discussing: data quality and data pedigree. The same is true for any Big Data implementation. Big Data without a good understanding of the data is just a big pile of data taking up space.

No technology negates the need for good planning and design. Big Data is not just about storing structured and unstructured data, and it's not just about providing the latest and greatest analytic tools. As technologists, we must work with the business to plan and design how to leverage and balance the data and its analysis. Work with the business to ensure there is a correct understanding of the data that is available, its quality, its pedigree, and the impact of those factors. Then the true value of Big Data will shine through as all the gems of little data are found.

This post is sponsored by SAS and Big Data Forum.

More Stories By Ed Featherston

Ed Featherston is VP, Principal Architect at Cloud Technology Partners. He brings 35 years of technology experience in designing, building, and implementing large complex solutions. He has significant expertise in systems integration, Internet/intranet, and cloud technologies. He has delivered projects in various industries, including financial services, pharmacy, government and retail.
