February 15, 2023

Agenda

We are still finalizing the agenda… please check back often!

10:00 AM – 10:20 AM (PST):
Registration

Meet your fellow Leaders, visit Exhibitors

10:20 AM – 10:35 AM (PST):
Richard Greenberg

Welcome Address

10:35 AM – 11:20 AM (PST):
Ira Winkler

Opening Keynote
TBD

11:20 AM – 11:40 AM (PST):

Break and Vendor Expo

11:40 AM – 12:40 PM (PST):
TBD

Roundtable Discussions:
“TBD”

12:40 PM – 2:00 PM (PST):

Lunch and Vendor Expo

2:00 PM – 2:45 PM (PST):
Moderator: Richard Greenberg

2:50 PM – 3:35 PM (PST):
Javier A. González

Talk:
TBD

3:40 PM – 4:25 PM (PST):
Ross Delston

Talk:
TBD

4:25 PM – 4:45 PM (PST):

Break, visit Exhibitors

5:35 PM – 6:20 PM (PST):
Malcolm Harkins

Closing Keynote:
“TBD”

6:20 PM – 6:30 PM (PST):
Haral Tsitsivas

Closing Remarks

6:30 PM – 8:30 PM (PST):

Happy Hour

Talk Descriptions:

Talk descriptions are still being finalized… please check back often!

Horizon Level Room

2:25 PM – 3:05 PM (PST)

“Highway to the Logger Zone: Enabling High Speed Big Data Analytics with a Multi-Terabyte Logging Pipeline Strategy”

Gal Shpanter

CISOs are being inundated with requests to exploit telemetry from old and new log sources, not to mention old and ‘new’ ideas about what to do with those logs. While most of this intense marketing focuses on ‘helping’ you decide which techniques and tools will help you search and analyze the logs (ML/DL/AI, ELK/Splunk/Backstory/Sentinel, etc.), very little attention is paid to the critical but non-sexy plumbing that gets the logs from their sources to the different tools that apply those techniques (the sexy stuff…).
Even a remotely realistic PoC for a new analytical platform can be a daunting task: these logs over here have to get to that platform over there… in the format, schema, and latency appropriate for that particular test case, in addition to wherever they currently need to be.
This talk focuses on that fundamental plumbing problem and answers the following questions at a management level, with key Dos and Don’ts for each question that you can take back to your org next week. You can benefit from this talk without knowing the technical difference between syslog and a distributed commit log:
• How do I estimate the size of this effort? Gigabytes become terabytes, and terabytes become petabytes… faster than we’re ready for them. What is a realistic approach to getting the most out of your current logs: capturing them in a scalable and forward-compatible pipeline, analyzing and transforming them in real time, then distributing them to where they need to go? (A back-of-the-envelope sizing sketch follows this list.)
• How do I onboard new sources to get business value out of previously unexplored logs?
• How do I future-proof my logging strategy, so that if I need to add/remove/upgrade analytical and storage products and services, I’m not stuck re-building the logging infrastructure before I can benefit from those changes?
• How do I get my CTO/CIO/CFO colleagues to work with me on this logging strategy? What do they get out of this?
• How do I reduce MTTD/MTTR with a logging strategy that enables real-time work, while also enabling long time-horizon batch analytics and cold storage for DR/BCP?
• How do I get cybersecurity value out of non-‘cyber’ sources by leveraging this logging strategy?
• How do I save money on the ‘water meter’ costs that many analytics platforms charge, so that I’m paying for a good signal-to-noise ratio and not just shoving a lot of useless information into an expensive tool? (FYI: you pay for this noise three times: ingestion-point water-meter costs, storage, and query performance. A minimal pre-ingestion filtering sketch also follows this list.)
• What is the order of operations involved in terms of hard dependencies vs parallel work, so that you can minimize time-to-value, while preserving your future options and avoiding vendor lock-in?
• What tools are available for on-prem and cloud environments?
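
As an illustrative aside on the sizing question above (these are not figures from the talk): a minimal back-of-the-envelope sketch in Python, using made-up event rates, showing how quickly per-event telemetry compounds into terabytes per day and petabytes per year.

# Rough pipeline-sizing sketch. Every number below is a hypothetical
# placeholder; substitute your own event rates and retention windows.

events_per_second = 50_000   # assumed aggregate rate across all log sources
avg_event_bytes = 800        # assumed average size of one log event
retention_days = 365         # assumed retention window (hot + cold)

bytes_per_day = events_per_second * avg_event_bytes * 86_400
tb_per_day = bytes_per_day / 1e12
pb_retained = bytes_per_day * retention_days / 1e15

print(f"Daily ingest:    {tb_per_day:.1f} TB/day")
print(f"Retained volume: {pb_retained:.2f} PB over {retention_days} days")

Even at these modest assumed rates the pipeline crosses into petabyte territory within a year, which is why the capture/transform/distribute plumbing needs to be sized independently of whichever analytics tools sit on top of it.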
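
Likewise, as a hedged illustration of the ‘water meter’ point (again, not material from the talk): a tiny Python sketch of dropping obvious noise before it reaches a per-GB-billed platform, so that ingestion, storage, and query costs are paid only for events worth keeping. The field names and drop rules are hypothetical.

# Hypothetical pre-ingestion filter: drop known-noise events before they hit
# a metered analytics platform. Field names and rules are placeholders.

NOISY_EVENT_TYPES = {"heartbeat", "keepalive"}

def worth_ingesting(event: dict) -> bool:
    """Return True only for events that justify ingestion cost."""
    if event.get("event_type") in NOISY_EVENT_TYPES:
        return False
    if event.get("severity", "info") == "debug":
        return False
    return True

events = [
    {"event_type": "logon_failure", "severity": "warning"},
    {"event_type": "heartbeat", "severity": "info"},
    {"event_type": "process_start", "severity": "debug"},
]

forwarded = [e for e in events if worth_ingesting(e)]
print(f"Forwarding {len(forwarded)} of {len(events)} events to the metered platform")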