Parenting: Time is of the Essence

How much time is it possible to spend with your children in a week?

Let's get right to it... less than 40 hours if you are a working parent, and even that is a high estimate!
There is research out there that says most families spend as little as five hours of face-to-face time together a week...

I am basing these numbers on the assumption that you can see your children whenever you want, so they apply to all parents. The numbers are just harder to reach for a parent who does not have that full legal right...


I want to preface this: many of you are doing what you can. Due to various circumstances, you may not be able to make the changes in your life that would give you the luxury of spending more time with your children. This is not meant to make parents feel guilty; there is enough of that from super-parent blogs and social media. We all know a lot more goes on behind the scenes than what appears on Facebook or Instagram, and what appears there is not reality.

The "Math"

There is a lot you could factor in, but we are only considering the basics: routines, work hours, recommended sleep hours for children of a specific age, and so on.
To drive the point home, this does not even account for taking time for yourself, watching a show, using the restroom, changing a diaper, and so on... which would make the possible time even smaller.
I have two children, 3 and 5 years old, so we will use the recommended hours of sleep for those ages.

Work Week

  • 8 hours = work day
  • 42 minutes = commute, round trip
    • Many areas have a longer average commute; Northern Virginia, for example, can reach 40 minutes one way!
  • 30 minutes = morning routine
    • Getting the child breakfast, dressed, and out the door.
  • 11 hours = child sleeping
    • Around the bare minimum for ages 2-5; some children may sleep 1-2 hours more or less.
  • 30 minutes = dinner / arriving home
  • 30 minutes = bed time
Per day:
21 hours and 12 minutes on routines.
2 hours and 48 minutes of free time.

Possible time per work week, Monday-Friday:
14 hours

Weekend

  • 30 minutes = morning routine
  • 11 hours = child sleeping
  • 30 minutes = dinner
  • 30 minutes = bed time
23 hours a weekend, Saturday-Sunday

Possible time
37 total hours per week
If your children are older, with less "doctor"-recommended sleep, then you are looking at between:
38-46 hours a week.
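The arithmetic above can be sanity-checked in a few lines of Python; all of the figures are just this post's assumptions:

```python
# Check of the weekly "possible time" math, using this post's assumptions.
# All durations are in minutes.
WORKDAY = 8 * 60   # work day
COMMUTE = 42       # round-trip commute
MORNING = 30       # morning routine
SLEEP = 11 * 60    # child sleeping (ages 2-5)
DINNER = 30        # dinner / arriving home
BEDTIME = 30       # bed time
DAY = 24 * 60

weekday_routines = WORKDAY + COMMUTE + MORNING + SLEEP + DINNER + BEDTIME
weekday_free = DAY - weekday_routines     # minutes free per work day

weekend_routines = MORNING + SLEEP + DINNER + BEDTIME
weekend_free = DAY - weekend_routines     # minutes free per weekend day

weekly_free_hours = (5 * weekday_free + 2 * weekend_free) / 60
print(weekday_free, weekend_free, weekly_free_hours)  # 168 690 37.0
```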

Takeaway

Your children will remember you by the time you spent with them... not that one of their friends got better cupcakes for their birthday than you gave them. Not what kind of career you had. Not the things you bought them.

Even if you have limited time with your children, remember that you can always spend quality time. Quality can matter just as much as, or more than, quantity.

Therefore, change what you can to increase your time with your children... or, just as well, change what you can so that the time you spend with them is higher quality (e.g., a less stressful job).


References:
  • Average commute: 26.4 minutes (US); 21.6 minutes (Columbus, Ohio); 26.7-29 minutes (Arlington, Virginia); 26-42 minutes (Northern Virginia) - https://www.insidenova.com/news/transportation/10-northern-virginia-commute-times-ranked/article_a06df730-bd90-11e8-a119-f70ed2e6f14a.html
  • 5 hours of face-to-face time with kids per week - https://www.studyfinds.org/modern-family-average-parent-spends-just-5-hours-face-to-face-with-their-kids-per-week/
  • 10 hours per work week spent on tasks before starting work - https://www.scarymommy.com/survey-kids-ready-extra-work-day/



Elastic Wildcard ECS Whirlwind

Elastic Common Schema is rolling back the wildcard data type (the savior of security-use-case searching) from ECS 1.8...

Reference 1: https://github.com/elastic/ecs/issues/1233

Reference 2: https://github.com/elastic/ecs/pull/1237

For prior reading on the wildcard data type, the keyword data type, text/analyzed fields, and case-insensitive searching of cyber-security data/logs, all with/around/using Elastic Common Schema and logging use cases:

I noticed in a comment on reference 2 that Elastic discovered "some notable performance issues related to storage size and indexing throughput that we must have time to review and address in a comprehensive way".

Right..... indexing things increases storage versus storing the thing as-is. It's 1 x $IndexTerms... UNLESS you get good compression ratios. Good compression usually comes at the cost of CPU, whether on the client, the server, at index time, or somewhere else (more on that later ;)
However, compression, and ultimately a reduction in storage, was a huge thing Elastic touted in its big announcement of the wildcard data type...

As I started digging further down the rabbit hole of why Elastic decided to roll back wildcard in 1.8, given that case-sensitive log/search bypasses have been well documented and communicated for almost two years now...

I noticed a very peculiar comment on the PR to Lucene that added the sauce (code) for improving this compression for the wildcard data type... The comment reads: "There's a trade-off here between efficient compression (more docs-per-block = better compression) and fast retrieval times (fewer docs-per-block = faster read access for single values)".
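The quoted trade-off is easy to demonstrate outside of Lucene. Here is a rough sketch in Python using zlib rather than Lucene's actual codec, so it is purely illustrative: packing more values into each compressed block improves the overall ratio, but reading a single value then requires decompressing a whole block.

```python
import zlib

# Illustrative log-like values with lots of shared structure,
# similar to command lines or file paths in security logs.
values = [f"C:\\Windows\\System32\\process_{i}.exe".encode() for i in range(1000)]

def compressed_size(vals, docs_per_block):
    """Total bytes after compressing the values in blocks of docs_per_block."""
    blocks = [
        zlib.compress(b"\n".join(vals[i:i + docs_per_block]))
        for i in range(0, len(vals), docs_per_block)
    ]
    return sum(len(b) for b in blocks)

small_blocks = compressed_size(values, 4)    # fast single-value reads
large_blocks = compressed_size(values, 256)  # better compression ratio
assert large_blocks < small_blocks  # more docs per block = smaller on disk
```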

OK... it should be pretty clear, but look also at the wording of many of the PRs... you will see things like "most cases", or "if" the data is similar or different.
In short, IT DEPENDS...
There are trade-offs in databases as a whole, let alone in their subcomponents, whether it is Elasticsearch, some SQL database, you name it...

I just want to know how we got here. How did we lose the ability to search for whether a value contains XYZ, regardless of upper/lower case, spaces, etc.?
Elasticsearch could always do that before. Sidebar... yes, the analyzed field was not perfect for security use cases, but it was there, and it was easier to work around its shortcomings than the situation the cyber-security community is in now (mostly that nobody knows their searches are not returning the results they expect)...
The company could have just created a community analyzer like that neu5ron person... I think he even worked there at one point ;)

Even if the wildcard data type had fixed everything by now, you still lose other powerful aspects of searching in Lucene (the Elasticsearch backend).

Such as fuzzy/Levenshtein-distance queries, term/ordering queries... so on and so forth... the things that are still useful for security use cases, and the things Elasticsearch as a whole is useful for in most (if not all) use cases, let alone cyber.
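For example, here is roughly what a fuzzy (Levenshtein-distance) query looks like in the Elasticsearch query DSL against a keyword field; the field name and example value below are made up for illustration, not taken from any real deployment:

```python
# Sketch of an Elasticsearch fuzzy query (edit-distance matching).
# The field name "process.name" and the value are illustrative only.
fuzzy_query = {
    "query": {
        "fuzzy": {
            "process.name": {
                "value": "powershell.exe",
                "fuzziness": "AUTO",  # edit distance scaled by term length
            }
        }
    }
}
# e.g. es.search(index="logs-*", body=fuzzy_query) with the Python client
```

This would also match near-miss lookalikes such as "powersheII.exe", which is exactly the kind of evasion security analysts hunt for.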

Can somebody at Elastic tell me what was wrong with the keyword data type? Set ignore_above to 10,000+ (32,000ish is even better ;), use global ordinals, and create a custom analyzer. What does the wildcard data type get us that is so special it needed its own brand-new data type... and needed to be licensed before the great big license change even happened?
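As a sketch of that keyword-based alternative: a keyword field with a custom lowercase normalizer gives case-insensitive exact and wildcard matching with the stock keyword type. The index, field, and normalizer names below, and the exact ignore_above value, are assumptions for illustration:

```python
# Hypothetical index body: a keyword field plus a lowercase normalizer
# for case-insensitive matching. All names and values are illustrative.
index_body = {
    "settings": {
        "analysis": {
            "normalizer": {
                "lowercase_norm": {"type": "custom", "filter": ["lowercase"]}
            }
        }
    },
    "mappings": {
        "properties": {
            "command_line": {
                "type": "keyword",
                "ignore_above": 32000,           # keep long values indexed
                "normalizer": "lowercase_norm",  # applied at index & query time
            }
        }
    },
}
# e.g. es.indices.create(index="logs-custom", body=index_body)
```

Because the normalizer is applied both when indexing and when querying, a term or wildcard search for "POWERSHELL*" and "powershell*" would hit the same documents.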

We would have solved the vast majority of the issues by now (free-text search :), kept the other search functionality, and needed fewer template/mapping changes... everybody roasting marshmallows and searching for bad folks on their networks.

The only explanations I can think of for why the wildcard data type fiasco occurred:
It was to decrease "storage" for licensing/purchase cost...
Or perhaps some Amazon debacle, because the wildcard data type had become licensed (before even the big license change situation).
It also does not help if there is nobody within, or empowered within, Elastic's organization who is what I like to call a "glue person". This would be somebody who transcends multiple aspects of the business and use case... in this example, somebody who knows the security use cases and the backend/Lucene (even a small amount is all that would have been needed), has actively or recently deployed and maintained a production environment, AND, most importantly, uses the data the way an analyst would... and works with the cyber community.

But let's think for a second... storage is one of the cheapest computing resources there is (versus CPU/RAM)...

So then what..?!
This is where it all gets muddy... Perhaps the increase in storage was such a big deal because there is a bigger pricing issue... a catch-22 where they shoot themselves in the foot and come in at a higher cost than anybody would expect, because the additional storage means having to license more nodes...
NOT TO MENTION... shooting themselves in the foot by moving a lot of the parsing/ECS work to Elasticsearch "ingest" nodes, which are licensed nodes... compression overhead = more compute... more compute = more licensed nodes... more licensed nodes = more license cost...
Or it is a genius evil business model :)

However, I don't think the storage increase is the real cost factor if it is assessed realistically. I think this is a cloud-storage licensing-model issue... combined with what I think is the biggest thing: some religious (sales) document out there that says X amount of TBs per X amount of (licensed) nodes, "NO MORE, NO LESS"... and those numbers are pretty unrealistic, I would assume.
Because, after X amount of days of immediately available (hot-architecture) data, where writes and reads overlap, it is not a huge concern to have much larger disks on a single server/resource unit...

As it still stands, I am completely uncertain what the need for the wildcard data type was.