In the current test, I expect to generate about 1,000 rows (i.e., one per Line ID) with roughly 80 columns. I have various tables whose records are linked together by a Line ID. I need to use dynamic SQL to do the pivot, as the number of columns in the final result set can change depending on which “GroupID” we are looking at. A sketch of the pattern I am using follows.
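For context, this is roughly the dynamic-pivot pattern I mean (a minimal sketch, assuming SQL Server 2017+ for STRING_AGG; older versions would build the list with FOR XML PATH instead). The names #Merged, LineID, ColName, and ColValue are placeholders for my actual objects, where #Merged is the combined temp table described below:

```sql
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build the delimited, quoted column list from the distinct attribute
-- names present for the current GroupID.
SELECT @cols = STRING_AGG(CAST(QUOTENAME(ColName) AS NVARCHAR(MAX)), N', ')
FROM (SELECT DISTINCT ColName FROM #Merged) AS c;

-- Inject the column list into the PIVOT and execute it.
SET @sql = N'
SELECT LineID, ' + @cols + N'
FROM #Merged
PIVOT (MAX(ColValue) FOR ColName IN (' + @cols + N')) AS p;';

EXEC sys.sp_executesql @sql;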
However, it is taking close to 30 seconds to execute the query. I have tried creating both a CLUSTERED and a NONCLUSTERED INDEX after each temp table is created, and that did improve the speed.
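Concretely, this is what I mean by indexing each temp table after it is populated (#Detail and LineID are assumed names standing in for one of my per-source temp tables and its join key):

```sql
-- A clustered index on the join key, created after the table is filled,
-- so the later joins / UNION ALL can scan it in key order.
CREATE CLUSTERED INDEX IX_Detail_LineID ON #Detail (LineID);
```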
Just before the pivot is executed via dynamic SQL, I merge all the temp tables into one using UNION ALL, and I have indexed that temp table as well (see the sketch below).
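The merge step looks roughly like this (#T1 through #T3 stand in for my per-source temp tables; #Merged is the combined unpivoted set of LineID / ColName / ColValue that feeds the pivot above):

```sql
-- Combine the per-source results into one temp table.
SELECT LineID, ColName, ColValue INTO #Merged FROM #T1
UNION ALL
SELECT LineID, ColName, ColValue FROM #T2
UNION ALL
SELECT LineID, ColName, ColValue FROM #T3;

-- Index on the keys the PIVOT will group by and pivot on.
CREATE CLUSTERED INDEX IX_Merged_LineID_ColName ON #Merged (LineID, ColName);
```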
(I have the execution plan saved if it can help; I will see how to attach it here.) I assume we just need to look at the operators with a large cost percentage relative to the batch. Is there anything else I can change, add, or tweak to make this go faster? I have heard views can speed things up, but I'm not sure how they work, nor whether they would help here, since the columns in each result set can vary and I have heard we cannot pass parameters to views to customize them.
I wasn't sure how to attach files here, so I have uploaded the execution plan to:
https://filebin.net/xf1dtenx70gh7ykr
What RDBMS are you using?