This post is a continuation of my last post, PASS Summit 2017 Recap – PreCon Sessions, and focuses on days 1-3 of the conference.
The next few days at PASS Summit were jam-packed with sessions, keynote speakers, vendors, and networking events. I tried to pick the sessions that would benefit me most from attending in person rather than watching the recordings afterward. Given the sheer number of sessions and hours of recordings, it's safe to say that I won't watch every session even after I receive the recordings. The next few sections highlight my key takeaways from the sessions I attended.
Day 1 Keynote Session
The keynote session was filled with people waiting to hear from Rohan Kumar and watch some live demos on SQL Server 2017 and Power BI.
Some of the features highlighted for SQL Server 2017:
- Advanced Machine Learning with R and Python
- Support for graph data and queries
- Native T-SQL Scoring
- Adaptive Query Processing and Automatic Plan Correction
You can read more in the release notes.
Other technologies/features mentioned in the keynote session:
- SQL Server support for containers
- Managed Instances (currently in private preview) – allows “lift and shift” of SQL Server environments into the cloud
- Azure Data Factory – there is now a V2 version out, which addresses a lot of the shortcomings when trying to use ADF as a replacement for SSIS
- Azure Machine Learning Workbench
- SQL Operations Studio (Code name Carbon, not yet publicly available) – coming out in a few weeks
Day 1 Sessions
I attended a good session on I/O monitoring by Jimmy May. I've run into several clients experiencing slow-performing applications, and I often find myself scrambling for ways to identify the bottleneck and what they can do to resolve their performance issues. This session was really helpful in highlighting the important counters to look at and how to interpret their output.
The six counters you need to look at for I/O:
- Latency – this is the most important
- IOPs
- Throughput
- Transfer Size
- Disk Queue Length
- Capacity
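To see how some of these counters relate to one another, here's a minimal sketch (my own illustration, not from the session) of the standard relationship throughput = IOPS × transfer size; the function name and sample numbers are mine:

```python
# Illustrative only: how IOPs, transfer size, and throughput relate.
# throughput (MB/s) = IOPS x average transfer size.

def throughput_mb_per_sec(iops: float, transfer_size_kb: float) -> float:
    """Throughput in MB/s given IOPS and the average transfer size in KB."""
    return iops * transfer_size_kb / 1024

# e.g. 5,000 IOPS at a 64 KB transfer size:
print(throughput_mb_per_sec(5000, 64))  # 312.5 MB/s
```

This is why a workload's throughput number alone can mislead: the same MB/s can come from many small transfers or a few large ones, which is part of the reason latency is the counter to watch first.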
In addition, Jimmy recommended looking up an article by Paul Randal showing how to use wait stats (sys.dm_os_wait_stats) to view resource wait times while excluding the benign ones. I looked it up and you can find his article here (great stuff!).
Some additional notes from this session:
- Use virtual file stats (sys.dm_io_virtual_file_stats) to get IOPs info. You can calculate the virtual file latency by dividing the stalls by the I/O count. Two great resources from Paul Randal again: Capturing IO latencies for a period of time, and How to examine IO subsystem latencies from within SQL Server.
- Always validate disk I/O subsystem components. Test the raw throughput by using DiskSpd (Using DiskSpd).
- When viewing I/O results, compare them against the vendor's specifications
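The stalls-divided-by-I/O-count calculation above is simple enough to sketch. Here's a hedged example with made-up numbers; the sample rows mimic columns you'd pull from sys.dm_io_virtual_file_stats, and the file names and values are hypothetical:

```python
# Sketch of the virtual file latency calculation: average latency per file
# = total stall time / I/O count. Sample data is invented for illustration;
# in practice these come from sys.dm_io_virtual_file_stats.

sample_files = [
    # (file, io_stall_read_ms, num_of_reads)
    ("DataFile1", 125_000, 50_000),
    ("LogFile1",    4_000, 20_000),
]

def avg_read_latency_ms(io_stall_read_ms: int, num_of_reads: int) -> float:
    """Average read latency in ms: stalls divided by I/O count."""
    # Guard against division by zero for files with no reads yet.
    return io_stall_read_ms / num_of_reads if num_of_reads else 0.0

for name, stall, reads in sample_files:
    print(f"{name}: {avg_read_latency_ms(stall, reads):.2f} ms/read")
```

Since the DMV counters are cumulative since startup, Paul Randal's "Capturing IO latencies for a period of time" approach (snapshot, wait, snapshot again, diff) gives a much truer picture of current latency than the raw totals.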
Some slides from the presentation (the entire presentation should be available in ~6 weeks. I’ll update this post with a link to it then):
In the afternoon, I went to check out Adam Saxton (Guy in a Cube) and Patrick LeBlanc for their Guy in a Cube Unplugged session, followed by Implementing a Complete Business Intelligence Solution in the Cloud, and Make Power BI Your Own with the Power BI APIs. The day was wrapped up with some networking and vendor learning at the exhibitor reception.
Day 2 Sessions
The second day I checked out the BI DevOps session by Jens Vestergaard, followed by Data Modeling and Design for New Features in SQL Server and Cosmos DB by Karen Lopez, and Ultimate Security and Sharing in Power BI by Reza Rad.
Lots of great tips and advice from all of these speakers, too many to list out here!
Day 3 Sessions
The last day was a half day for me as I had a plane to catch out of Seattle at 3:30 PM, but I still made the best of it. In the morning I started with DAX Best Practices by Marco Russo. This was a nice refresher on DAX, and Marco shared some interesting insights into the good and the bad ways of writing it. He's a fun guy to listen to, and I'm glad I attended his session even though he was having issues with a couple of the demos (he blamed it on Alberto for not being able to get him coffee from Starbucks 🙂 ). The second session was DAX Optimization Examples by Alberto Ferrari, and it built on what Marco had presented earlier. I learned about the Storage Engine and the Formula Engine, using DAX Studio to run queries and create benchmarks, and how to troubleshoot poorly performing DAX queries (for more info, check out this whitepaper on how to optimize DAX queries by Alberto). This was another one of my favorite sessions, and I can't wait to get the recording so I can re-watch it and pick up on some of the things I didn't have time to write down.
The last session of the day for me was Creating Enterprise Grade BI Models with Azure Analysis Services or SQL Server Analysis Services by Christian Wade. This was another great session that showed some of the new functionality available with SQL Server 2017 for Analysis Services. Christian showed the new object-level security, and did a great demo on how IT can integrate a model developed by a power user into an existing sanctioned model using BISM Normalizer. This tool lets you do a schema compare between SSAS Tabular models, so you can easily append new tables to an existing model. He also showed how you can import a PBIX file into Azure Analysis Services and create a solution file from it. While this is only available in Azure, his advice was that you could spin up the server just to import the model and then pause it, minimizing the cost associated with it.
Lastly, Christian showed how to control the Drillthrough behavior in a Tabular model by adding a DAX query in the Detailed Rows Expression property.
This is currently only available when viewing data through Excel and right-clicking on a value and selecting “Show Details.” Here’s a blog post by Marco on how to write DAX expressions for the Detail Rows property.
It's a great improvement over previous versions, in which the same action showed all the columns associated with the table where the measure resided, which was not always helpful. Christian also hinted at this functionality coming to Power BI, which would be really awesome!
Until Next Year
That's it, that's the recap for this year's PASS Summit! I am glad I was able to attend. I met lots of great people who are as passionate about BI and SQL Server as I am, added some new tools to my tool belt, and had an overall awesome experience in Seattle. I'm looking forward to next year and hope to see some familiar faces again!
Feel free to share your thoughts in the comments below. Cheers.