So I spend a lot of my time working on what’s wrong with Splunk or what could be improved. If it’s easy and solved, then we knock it out and move on.
One of the problems I see is that it’s hard to take different data sources and turn them into a path; while I haven’t played with graph databases, I’ve started reading up on them, as I think they could be the next logical step.
So at maturity level 0, you’re sending data into Splunk. You then start adding knowledge extractions. Next, you can pull data via a search into the graph database (similar to how data models work now). Once you feel that you have reached maturity level 1, you can add a task to the indexing pipeline which parses the events and populates the graph database.
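To make the parse-and-populate step concrete, here is a minimal sketch of turning extracted Splunk-style events into graph nodes and edges. The field names (`src`, `dest`, `action`, `_time`) and the in-memory node/edge structures are hypothetical stand-ins for whatever your knowledge extractions and actual graph database provide; a real pipeline task would write these to the database instead of returning Python collections.

```python
# Hypothetical sketch: convert parsed events into nodes and edges so that
# related events from different sources line up as a traversable path.
def events_to_graph(events):
    nodes, edges = set(), []
    for event in events:
        src, dest = event["src"], event["dest"]
        nodes.add(src)
        nodes.add(dest)
        # Keep the event time on the edge so the graph retains a time component.
        edges.append({"from": src, "to": dest,
                      "action": event["action"], "_time": event["_time"]})
    return nodes, edges

# Two events from different sources that share a host become one path:
# 10.0.0.5 -> 10.0.0.9 -> 10.0.0.12
events = [
    {"src": "10.0.0.5", "dest": "10.0.0.9", "action": "login", "_time": 1400000000},
    {"src": "10.0.0.9", "dest": "10.0.0.12", "action": "copy", "_time": 1400000060},
]
nodes, edges = events_to_graph(events)
```

The same transform works whether it is fed by a scheduled search (the level-0 approach) or by a task sitting in the indexing pipeline (level 1); only the trigger changes.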
The graph database should still have a time component. Like data models, you might not retain data for as long (there should be a way to pull data in from a search if it isn’t already there, similar to an accelerated data model).
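A shorter retention window could be as simple as pruning edges older than a cutoff. This sketch assumes each edge carries an epoch `_time` property; the one-week window and the list-of-dicts representation are illustrative assumptions, not anything Splunk or a particular graph database prescribes.

```python
import time

# Keep one week of graph data, analogous to a short data-model summary range.
RETENTION_SECONDS = 7 * 24 * 3600

def prune_edges(edges, now=None):
    """Drop edges older than the retention window; return the survivors."""
    now = time.time() if now is None else now
    cutoff = now - RETENTION_SECONDS
    return [e for e in edges if e["_time"] >= cutoff]

edges = [{"_time": 1000, "action": "old"},
         {"_time": 9000, "action": "new"}]
# With "now" just past the old edge's retention window, only the new edge survives.
kept = prune_edges(edges, now=1000 + RETENTION_SECONDS + 1)
```

Anything pruned this way could be backfilled on demand by a search, mirroring how an accelerated data model rebuilds its summaries.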
Other thoughts: While I was very excited to see Hunk 6.1 include support for Neo…, it raises some interesting questions. With Hunk 6.1, you incur an additional license cost, yet db_connect is free and lets you add context without incurring additional licensing costs. If it were just adding graph databases, I could understand, but it seems like all NoSQL access is being added in Hunk and thus carries the additional license cost associated with it.