Give Your Unstructured Data the Myers-Briggs

One of the Problems Currently Facing the Enterprise Is Properly Categorizing Its Data

For those who don’t know, the Myers & Briggs Foundation describes the premise of the Myers-Briggs assessment this way: “The essence of the theory is that much seemingly random variation in behavior is actually quite orderly and consistent…” The same can be said about your data. Much of what seems random is consistent and predictable. One of the problems currently facing the enterprise is properly categorizing that data so that its “personality” is well known. You cannot sort (or tier) what you don’t know, and this is a simple proposal for how you might begin such a categorization.

No matter what your organization does, it holds a variety of data, in a variety of types, with a variety of attributes that can be built into indices to help you understand not only what you have, but how much of it you have, how important it is to the organization, and how you can use all of this information to move data about in an intelligent manner.

Myers-Briggs uses initials to give your average Joe (or Jane) an easy-to-access summary of a given individual’s personality type, and by extension how to interface with that person. This little tool aims to do the same type of thing for your data, so I kept their initials and mapped them to information about your data that will help you figure out what to do with it. Perhaps a bit gimmicky, but it’s valid, so let’s get started defining your data.

We can break data information into two categories – physical and extended. Physical information can be readily accessed and utilized by automated tiering systems like the one built into our own ARX, but extended information is unique to your organization, and some of it will fluctuate over time. That is the hard part; interviews and intelligent data analysis will be required to determine it. Not insurmountable, but certainly a task, and if you’re a geek who doesn’t like to play with “squishy” data, not an enviable task at all. Knowing this stuff, though, will help you come to logical conclusions about where and how the data should be stored.

Physical      Extended
----------    ------------
Extension     Interest
Timestamp     Jurisdiction
Size          Permissions
Filesystem    Necessity

First, clear definitions of each attribute; a small sketch of how these might be captured in code follows the list.

  • Extension is the file extension. It will not only tell you the type of file (generally), it will also tell you the aggregate type. An AVI falls into the video category, for example. Yes, it can be audio too, but most organizations treat the two media types similarly when making decisions strictly on extension.
  • Timestamp is the last time the filesystem shows this file as having been written. If you have a tool (like ARX) that also lets you accurately track last access date/time, you can use that information much more intelligently than last save time.
  • Size matters because, let’s face it, the multi-gigabyte file is going to be treated differently than the 10K file, simply because it is a big win to get it off of tier one storage and onto something cheaper.
  • Filesystem tells you what it costs to keep the file where it is. Files on the SAN generally cost more to keep there than those on the NAS. If your SAN is low-end and your NAS high-end, that may not hold for you, but either way, knowing what filesystem a file is hosted on helps you understand the impact of moving that file.
  • Interest is how much interest this file would hold for ne’er-do-wells who got access to the storage medium it is currently stored on.
  • Jurisdiction identifies the ultimate owner of this data: the person who can make decisions about its use, distribution, and access rights.
  • Permissions covers who has access to this file, whether that access is granted by user or by group, and whether it is managed on the file itself or on the filesystem it is stored on.
  • Necessity is how this file is used within the organization. If it went away tomorrow, who would be impacted, and how?
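To make the scheme concrete, here is a minimal sketch of how a file’s “personality” might be recorded. The FilePersonality class and the Level ratings are hypothetical names invented for illustration, not part of any product.

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum


class Level(Enum):
    """Coarse rating used for the extended ("squishy") attributes."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class FilePersonality:
    # Physical attributes: readable straight from the filesystem
    path: str
    extension: str          # e.g. ".avi"
    last_written: datetime  # last time the filesystem saw a write
    size_bytes: int
    filesystem: str         # e.g. "SAN-tier1" or "NAS-archive"

    # Extended attributes: come from interviews and policy, not the OS
    interest: Level         # value to a ne'er-do-well who reaches the storage
    jurisdiction: str       # who decides use, distribution, and access rights
    permissions: str        # "user", "group", or "filesystem-managed"
    necessity: Level        # impact on the business if it vanished tomorrow
```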

The idea is to collect all of this information about your files so that you can make intelligent decisions about how to move that data around and store it in the most appropriate place. As I said above, there are tools to help you with the physical attributes, and some of them help with Permissions also. But you’ll still need to collect the other data, and that’s a lot of work. If you just plain don’t have time to interview director-level people about their team’s data usage and specific files, then start with directories. Something is better than nothing, after all, and most groups behave predictably, putting like data into the same folders as far as usage, permissions, jurisdiction, and necessity go. After all, the fantasy football spreadsheet isn’t generally stored in the new product development folder.
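The physical attributes can be gathered with a short script. Here is a minimal sketch using Python’s standard library, assuming the FilePersonality class and Level enum from the sketch above; the directory-level defaults for the extended attributes are purely illustrative placeholders.

```python
from datetime import datetime
from pathlib import Path

# Assumes the FilePersonality class and Level enum defined earlier.

def collect_physical(root: str, filesystem_label: str) -> list:
    """Walk a directory tree and record the physical attributes of every file.

    The extended attributes are filled with directory-level defaults here;
    they still have to come from interviews or written policy.
    """
    records = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        stat = path.stat()
        records.append(FilePersonality(
            path=str(path),
            extension=path.suffix.lower(),
            last_written=datetime.fromtimestamp(stat.st_mtime),
            size_bytes=stat.st_size,
            filesystem=filesystem_label,
            # Placeholder defaults until someone who knows the data weighs in
            interest=Level.MEDIUM,
            jurisdiction="unknown",
            permissions="group",
            necessity=Level.MEDIUM,
        ))
    return records
```

Run once per mount point (for example, collect_physical("/mnt/nas/projects", "NAS-tier2")) and you have the raw material to layer the interview-driven attributes on top of.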

Using these values, you can properly categorize your data, which is the first step to both understanding it and organizing it – and tiering it.

Unlike Myers-Briggs, these attributes can have multiple non-numeric values, so your tracking will be a little more complex than a Myers-Briggs score, but it will be highly valuable in helping you figure out what to do with your data. Data whose necessity is high will obviously take pride of place on your tier one storage systems – unless it is almost never accessed, which the better version of timestamp (last access) could tell you.
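As a rough illustration of how these attributes might drive a placement decision, here is a sketch of a rule-based function. The tier names and the 90-day staleness cutoff are arbitrary assumptions for the example, not a recommendation, and it again assumes the classes defined above.

```python
from datetime import datetime, timedelta

# Assumes FilePersonality and Level from the sketches above.

def suggest_tier(f) -> str:
    """Very rough placement rules; tier names and the 90-day cutoff are made up."""
    stale = datetime.now() - f.last_written > timedelta(days=90)

    # High-necessity data earns tier one - unless nobody ever touches it
    if f.necessity is Level.HIGH and not stale:
        return "tier-1"
    # High-interest data stays on protected, in-house storage
    if f.interest is Level.HIGH:
        return "tier-2-protected"
    # Big, stale files are the easy archive (or cheaper storage) wins
    if stale and f.size_bytes > 1_000_000_000:
        return "archive"
    return "tier-2"
```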

If you just don’t have the man-hours, cooperation, or desire to work through all of this, then invest in an automated tiering product, turn it on, and let it learn. It will get you 50% of the way there, maybe 5/8ths, with no significant effort on your part. You’ll have to install it, configure it, and monitor it… but the investment is small compared to interviewing business owners and asking them to make definitive statements about all of the data they own. And it gets you started.

In the end, you can’t send data that is of high interest unprotected into the cloud, you can’t push data that is frequently accessed into an archival format, and you want to know how many movies and audio files you have, where they’re stored, and how much space they take up. So the more you know, the more power you will have over your storage environment.
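Answering that last question is straightforward once the records exist. Here is a small sketch that tallies media files per filesystem; the extension-to-category set is just an example and should be adjusted to whatever your organization actually sees.

```python
from collections import defaultdict

# Example media extensions; adjust for what your organization actually stores.
MEDIA_EXTENSIONS = {".avi", ".mp4", ".mov", ".mkv", ".mp3", ".wav", ".wma"}

def media_footprint(records):
    """Count media files and total bytes, grouped by the filesystem they sit on."""
    counts = defaultdict(int)
    size_totals = defaultdict(int)
    for f in records:
        if f.extension in MEDIA_EXTENSIONS:
            counts[f.filesystem] += 1
            size_totals[f.filesystem] += f.size_bytes
    return counts, size_totals
```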

Myers-Briggs is a trademark of The Myers & Briggs Foundation.



More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
