

Just when you thought SQL Server couldn't get better, Microsoft is announcing the features for SQL Server 2014. They haven't announced the licensing/pricing, but I'll tell you what I do know so far.

First, open this in another tab and hit play so you've got some background music while you read. Done with the commercial? Okay, let's get to it:

More online maintenance operations. Got big data in a partitioned table? Is nobody giving you any time to do maintenance? Just no time to stop and get away cause you work so hard to make it every day? Well, with SQL 14, you can rebuild a single partition's index online, and you can switch partitions in/out using DBA-specified lock priorities (the same WAIT_AT_LOW_PRIORITY options apply to ALTER TABLE ... SWITCH). The rebuild looks like this – table and index names are placeholders, obviously:

ALTER INDEX IX_YourIndex ON dbo.YourPartitionedTable REBUILD PARTITION = 3
WITH (ONLINE = ON (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5, ABORT_AFTER_WAIT = BLOCKERS)));

- PARTITION = 3 – you can pick the specific partition you want to rebuild, and you can do it online.
- WAIT_AT_LOW_PRIORITY – just hang out when you need the schema mod lock.
- MAX_DURATION = 5 – wait for up to 5 minutes.
- ABORT_AFTER_WAIT = BLOCKERS – takes a few different values. If NONE, everybody just keeps waiting, doin' the neutron dance. If SELF, your index rebuild will give up and let user queries keep going. If BLOCKERS, then SQL Server will abort (kill) the queries blocking your index rebuild.

There's also new Extended Events stuff you can use to monitor who's getting blocked and killed – I'll sketch that out at the end of this post. For 24/7 workloads, this gives the DBA the ability to do maintenance with lower locking, CPU, and memory overhead.

Cache frequently used data on SSDs. You can specify an SSD (or an SSD array) to be used to extend memory. SQL Server 2014 will automatically cache data there with zero risk of data loss. (Only clean pages, not dirty pages, are stored there.) The best use case is for read-heavy OLTP workloads. This works with local SSDs in clusters, too – each node can have its own local SSDs (just like you would with TempDB) and preserve the SAN throughput for the data and log files.

SSDs are cheap, and they're only getting cheaper and faster. Here are the questions you'll want to ask before you use this feature:

- Is your total actively queried data set bigger than you can fit in memory? Note that I didn't say all data: you might have archive or history or audit tables in your databases that are never queried, and there's no sense in caching those.
- Have you already maxed out the memory on the server? If not, start there first – memory can be used for more than just caching clean pages.
- Do business requirements force you to use shared storage or magnetic local storage? If not, consider moving the data to local SSD entirely.
- Does your server have room for locally attached PCI Express or SAS/SATA solid state drives?

If the answer to all of those questions is yes, an SSD buffer pool extension may be for you. Honestly, Microsoft could stop there and I'd probably still recommend the new version for most of my clients, because that's a killer performance benefit.
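So what does turning it on look like? Here's a minimal sketch based on the syntax Microsoft has shown so far – this is pre-release, so the details could change, and the file path and size are just placeholders for your own local SSD:

ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = N'F:\SSDBufferPool\BufferPool.BPE', SIZE = 50 GB);

-- Confirm it's enabled (DMV name from the preview bits – another assumption):
SELECT path, state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;

Turning it off again is just ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF – and since the extension only ever holds clean pages, you can drop it at any time without losing data.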
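And about that Extended Events stuff for monitoring who's getting blocked and killed: here's a rough sketch of a session using the event names floating around in the preview (ddl_with_wait_at_low_priority and lock_request_priority_state) – treat those names as assumptions until this thing actually ships:

CREATE EVENT SESSION LockPriorityWatcher ON SERVER
    ADD EVENT sqlserver.ddl_with_wait_at_low_priority,  -- fires when DDL runs with WAIT_AT_LOW_PRIORITY
    ADD EVENT sqlserver.lock_request_priority_state     -- fires as a lock request's priority changes
    ADD TARGET package0.event_file (SET filename = N'LockPriorityWatcher.xel');
GO
ALTER EVENT SESSION LockPriorityWatcher ON SERVER STATE = START;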
