Thank you. The demos were explained very well and in a simple manner. It was useful and informative.
Thanks buddy!
Thanks for the video. It has resolved all my queries. Explained in a simple way and to the point. Very helpful!
Thanks!
Thank you so much for the video; finally I am able to deploy securely to an on-premises server.
Great!
Thank you very much, Kamil. You explained everything in a very simple manner; I got a wealth of knowledge from this workshop.
Thanks
Hi @Kamil, your video has saved tons and tons of hours of searching, watching and understanding. I wish I could give you the highest award for this topic :). It's highly recommended and my entire team is referring to your videos. Superb and very well explained. @Kamil, thank you again.
So very useful. Thank you so much!
Thank you, Debbie.
Great! Very informative session 😀 Waiting for more and more videos 😄
Thanks a ton :)
Thanks for the session!
Any time!
Hi @Kamil. Thanks for the wonderful presentation. Is there a particular approach you would suggest for deploying changes to only a couple of tables/objects? I'm still trying to understand how current data doesn't get affected by new builds and releases. Any light you can shed would be great. Again, thank you!
Hi @Mr Ant. You can apply one trick, but generally you should aim to deploy the entire database project (the latest master version). Please make sure you understand the whole concept of SSDT and its deployment - it deploys only the differences, so the step before that happens is comparing the source to the target. Check out the following free lesson in my course: learn.sqlplayer.net/view/courses/ssdt-essentials/981098-module-4/2932061-introduction
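If you want to see exactly what that comparison produces before anything is touched, you can ask SqlPackage for the migration script only. A minimal PowerShell sketch (it assumes SqlPackage.exe is on PATH; server, user and file names are placeholders, not from the video):

# Compare the DACPAC (source) to the target database and write the
# migration script without executing anything.
SqlPackage.exe /Action:Script `
    "/SourceFile:bin\Release\MyDatabase.dacpac" `
    "/TargetServerName:myserver.database.windows.net" `
    "/TargetDatabaseName:MyDatabase" `
    "/TargetUser:deploy_user" "/TargetPassword:$env:SQL_PASSWORD" `
    "/OutputPath:upgrade.sql"
# upgrade.sql now contains only the differences between source and target.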
Hi @Kamil. Thanks for the wonderful video. Can you please advise how to do multiple database deployments using CI/CD? In my case, the DB server has around 700 databases. Thanks
Hopefully not all of those 700 databases are related to each other (cross-database queries). You need to apply the same mechanism I showed in this video to each database. If you have a group of databases with cross-database references, put them all in one solution. If you have specific concerns, please raise them here so we can discuss further: github.com/NowinskiK/ssdt-training/discussions
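For illustration only, a rough PowerShell sketch of publishing one DACPAC to many identically-structured databases (the server name, credentials and database list below are placeholders, not from the video):

# One DACPAC, many identical databases: publish the same package to each.
$databases = @('Tenant001', 'Tenant002', 'Tenant003')   # ...or all 700, e.g. read from a file
foreach ($db in $databases) {
    SqlPackage.exe /Action:Publish `
        "/SourceFile:bin\Release\MyDatabase.dacpac" `
        "/TargetServerName:myserver" `
        "/TargetDatabaseName:$db" `
        "/TargetUser:deploy_user" "/TargetPassword:$env:SQL_PASSWORD"
}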
@@KamilNowinski, thanks for the details. In my case, all 700 databases have the same structure. All we need to do is update the stored procedures / views / functions / tables in an incremental manner. I'm going to start my work and will let you know if I have any questions. Again, thanks!
Very good information and session, thanks a lot.
I am looking for the same using Jenkins. Could you please make one using Jenkins? It would be greatly helpful for me.
Thanks in advance.
Janam, one of my friends, Gavin, wrote this article a few years ago which might help: gavincampbell.dev/post/jenkins-windows-git-ssdt-profit/
However, if there is no ready-to-go task for publishing a DACPAC, you can use a PowerShell task to execute SQLPackage.exe. I explained this in the following lesson of my course: learn.azureplayer.net/view/courses/database-projects-with-ssdt-dacpac/734805-module-6/2417631-publishing-with-sqlpackage-powershell
The above lesson is free! HTH. Enjoy!
Great explanation, thanks for taking the time to put in all the remedial steps - generally where I get frustrated. I got as far as generating the script in VS that I want to publish (59 min). The repo version has a view with a comment (--TEST), and I have an existing DB in Azure that I want to update with that change (ALTER VIEW with the comment --TEST). When I generate the script, no code is generated to make the update, but if I create a new DB, all the object code is created. What am I doing wrong? I want to update existing DBs with new or altered objects.
A comment is not a real change to a database object, which is likely why you see no changes in the generated migration script.
Hi @Kamil.
Is it possible to use YAML (I'm using Azure Data Studio as my source control/branch manager into an Azure DevOps remote repo, along with build/deployment pipes) for those builds and deployments?
I set up a lower environment with Red-Gate SQL Source Control and SQL Change Automation, but I want to try out SSDT using ADS and build an automated build/deployment pipe to compare workflows. I'm kind of liking ADS more than the Red-Gate solution, even if it's not as intuitive as the RG solution, but I was hoping you'd have used YAML for your demo. I guess for POC purposes the Classic is enough to get started.
Which would you recommend and why?
Thanks so much for your presentation!
Hi Ryan, Classic UI for POC is absolutely enough. The differences between state-based vs migration-based approaches are huge, hence I'm not going to describe them here. My friend, Alex Yates, made an excellent comparison though which you can read here: github.com/NowinskiK/ssdt-training/blob/master/docs/Redgate-vs-SSDT-4.pdf
I hope this will help.
How can we implement a CI/CD pipeline for a multi-tenant DB?
Yes, it's possible. Do you mean multiple databases (one per tenant) or different Azure Active Directory tenants?
@@KamilNowinski Multiple databases (one per tenant). Can you please do a tutorial on this?
The concept is exactly the same, but you need to repeat it against n databases: 1) build (generates the DACPAC), 2) generate the migration script (against the target database), 3) execute the migration script, 4) repeat for the next database. Did I miss something?
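A hedged PowerShell sketch of steps 2-4 (it assumes step 1 already produced the DACPAC, that SqlPackage.exe and sqlcmd are on PATH, and that all names are placeholders):

# Step 1 (build) has already produced bin\Release\MyDatabase.dacpac.
foreach ($db in @('TenantA', 'TenantB')) {
    # Step 2: generate the migration script against this target database.
    SqlPackage.exe /Action:Script `
        "/SourceFile:bin\Release\MyDatabase.dacpac" `
        "/TargetServerName:myserver" `
        "/TargetDatabaseName:$db" `
        "/OutputPath:upgrade_$db.sql"

    # Step 3: execute the generated script; -b aborts on the first error.
    sqlcmd -S myserver -d $db -i "upgrade_$db.sql" -b
}
# Step 4 is the loop itself: it repeats for the next database.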
When you have errors, just exclude them :D
Haha, I know what you mean (28:49), but that was intentional for the purposes of this demo. Of course, you can always fix all the errors or suppress them.
Kamil, very informative, and thanks for sharing this. I have a solution which contains multiple databases, and as such I need a separate dacpac file created for each database. How is that possible in the Azure DevOps CI portion?
That's absolutely not a problem. Your solution contains a few separate databases, and building the solution file will build all the projects in it. Each project has its own dacpac file, so having 4 databases in the solution, you'll get 4 dacpac files.
@@KamilNowinski Thank you for your prompt response. You are right: when I build my solution manually in VS, it creates separate dacpac files in the respective bin folders. The issue I am facing is that I am using the same Azure DevOps CI as in your video. The process uses a system variable - in my case the root solution folder - and as a result produces one dacpac called root-folder. How do I publish 4 dacpac files from the CI pipeline in Azure DevOps for this one solution? Thanks in advance.
I am getting this error:
2023-02-27T09:39:37.6420094Z ##[error]*** Verification of the deployment plan failed.
2023-02-27T09:39:37.6483928Z ##[error]Error SQL72031: This deployment may encounter errors during execution because changes to [***] are blocked by [db_ddladmin]'s dependency in the target database.
Raised the issue here: github.com/NowinskiK/ssdt-training/issues
I will take a look in a spare moment.
How do we manage logins and user permissions when deploying to multiple environments?
For example: if we have users User1 & User2, User1 should not be present on Dev and User2 should not exist on UAT.
Check these two examples from my commercial course (code is free):
github.com/NowinskiK/ssdt-training/blob/master/src/Variables/ContosoRetailDW/Deployment/Environment-specific.sql
github.com/NowinskiK/ssdt-training/blob/master/src/PrePostDeployment/ContosoRetailDW/Scripts/CreateUsers.sql
If you're interested in the whole course: learn.azureplayer.net/ssdt-essentials
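The pattern in those files, roughly: a SQLCMD variable drives the post-deployment script, and each stage passes its own value at deploy time. A sketch of the deployment side (the variable name "Environment" and all other names are assumptions for illustration):

# The post-deployment script branches on the SQLCMD variable, so only
# the users valid for this environment get created.
SqlPackage.exe /Action:Publish `
    "/SourceFile:bin\Release\ContosoRetailDW.dacpac" `
    "/TargetServerName:dev-server" `
    "/TargetDatabaseName:ContosoRetailDW" `
    "/v:Environment=DEV"   # e.g. UAT or PROD in the other stages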
If I create a new view, push the update to the repo, and the pipeline deploys that change to the target DB, and then I revert that new view's commit - will the release pipeline delete that view, or do I need to create a new script in SSDT for dropping it?
It depends on the publish options. Check this out (free): learn.azureplayer.net/ssdt-tips
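For the specific case above, the option to check first is most likely DropObjectsNotInSource. A sketch (names are placeholders):

# DropObjectsNotInSource is false by default, so a view removed from the
# project stays in the target; setting it to true makes publish drop it.
SqlPackage.exe /Action:Publish `
    "/SourceFile:bin\Release\MyDatabase.dacpac" `
    "/TargetServerName:myserver" `
    "/TargetDatabaseName:MyDatabase" `
    "/p:DropObjectsNotInSource=true"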
Thank you, but the Visual Studio build is not working with the SSDT project; I prefer to use MSBuild, I think.
It should work as presented in the video. Do verify whether you have the SSDT extension installed in Visual Studio (18:40).
@@KamilNowinski Yes, it builds fine locally, but via the agent it can't find the path: "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" was not found.
So I use MSBuild and it works fine. Maybe it's related to the Enterprise version or something else; Visual Studio is not the most stable product, hehe.
@Kamil, thanks for the video, really helpful. It works fine, but when I delete a table in DEV it doesn't get deleted in PROD. In VS SSDT I created a publish.xml which states: <DropObjectsNotInSource>True</DropObjectsNotInSource>
However, it still doesn't drop the tables in SQL. I tried to find this .xml file in the Publish Profile field of my release pipeline, but I can't select it...
Anybody have a clue?
Try to add your problem here: github.com/NowinskiK/ssdt-training/discussions
and provide more details about your configuration and what you do.
Can we include system views in the SQL solution file? Some of my views refer to system views.
In that case you just need to add the "master" (system) database to the project references and that's it. All references to any system views will be resolved.
Hi @kamil, I'm getting the error "A project which specifies SQL Server 2016 as the target platform cannot be published to Microsoft Azure SQL Database v12." I tried the solution you provided but am still getting the same error. Can you please suggest something?
You must change the target platform: ua-cam.com/video/4N_fv6d3KQY/v-deo.html
It should help. If not - create a question in GitHub with more details: github.com/NowinskiK/ssdt-training/discussions
Thanks!!
Welcome!
Hi Kamil,
Thanks for the detailed video on sql deployment. It was really helpful.
As mentioned in the video tutorial, I am using the "Azure SQL DacPacTask" task to achieve the goal.
On the Deployment Package Action, I chose "Script" just to check and verify the script that is being generated.
The pipeline is successful and I can see in the logs that the script was generated and stored at the location "D:\a\1\a\GeneratedOutputFiles".
But I cannot open/see that location. Probably the location belongs to the DevOps server?
Is there a way to configure this script output location to my local machine?
Basically, I want to verify the script once before really publishing.
You are right. The location is on the agent that executes the task(s). You could send the generated script by email, for instance, but a better idea, I guess, would be to print the script's content to the agent log.
I explain such scenario in my commercial course here: learn.sqlplayer.net/view/courses/database-projects-with-ssdt-dacpac/734805-module-6/2436306-report-in-release-pipeline-cd
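A minimal sketch of such a step (an inline PowerShell task; the folder below is the one from your log):

# Write every generated script into the pipeline log, so it can be
# reviewed without access to the agent's disk.
Get-ChildItem "D:\a\1\a\GeneratedOutputFiles" -Filter *.sql | ForEach-Object {
    Write-Output "===== $($_.Name) ====="
    Get-Content $_.FullName
}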
Hi @kamil, thanks for the videos, they help us a lot, but I'm getting this error when trying to execute the release:
SQL72014: .Net SqlClient Data Provider: Msg 50000, Level 16, State 127, Line 6 Rows were detected. The schema update is terminating because data loss might occur.
SQL72045: Script execution error. The executed script:
RAISERROR (N'Rows were detected. The schema update is terminating because data loss might occur.', 16, 127)
WITH NOWAIT;
If you reply, it will be appreciated.
Your target table already contains some data, hence the error. Do check the publish options.
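The publish option behind this particular error is BlockOnPossibleDataLoss, which is true by default. A sketch of overriding it (names are placeholders):

# WARNING: only override this once you have confirmed the flagged data
# loss (e.g. a dropped or narrowed column) is acceptable.
SqlPackage.exe /Action:Publish `
    "/SourceFile:bin\Release\MyDatabase.dacpac" `
    "/TargetServerName:myserver" `
    "/TargetDatabaseName:MyDatabase" `
    "/p:BlockOnPossibleDataLoss=false"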
How can I do this with a GitHub workflow? Please provide a reference if there is one. Thanks!
I will prepare a distinct video about this, but the short answer is: you can use MSBuild.exe to compile the database project (giving you a DACPAC file as the result - CI), and then SQLPackage/DacFx (which should be installed on popular agents) to execute the PUBLISH command (CD).
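Until that video is ready, a rough sketch of the two commands a workflow job on a Windows runner could run (it assumes MSBuild and SqlPackage are available on the runner; all names are placeholders):

# CI: compile the database project into a DACPAC.
msbuild .\MyDatabase.sqlproj /p:Configuration=Release

# CD: publish the freshly built DACPAC to the target database.
SqlPackage.exe /Action:Publish `
    "/SourceFile:bin\Release\MyDatabase.dacpac" `
    "/TargetServerName:myserver.database.windows.net" `
    "/TargetDatabaseName:MyDatabase" `
    "/TargetUser:deploy_user" "/TargetPassword:$env:SQL_PASSWORD"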
Thanks for the video! Can I connect to a database with MFA? I can't see that option when I try it in VS. Thanks
Do you mean connecting to a database with AAD (Azure Active Directory) authentication with MFA? If so, in Visual Studio it's available as "Active Directory Interactive Authentication".
@@KamilNowinski Thanks, sir. I didn't find it because I was using the VS from the Microsoft Store. I'll take this opportunity to ask another question, a little out of context: can we make a database with MFA switch the number of vCores automatically, following a schedule? Thank you!
How do you perform delta changes using SSDT - changes to the schema of tables, views, etc.?
Keyser, sqlpackage.exe generates the delta script by comparing the DACPAC file to the target database. I explain the process in this video - it starts at 46:30.
Hi @Kamil, how can we deploy with user permissions?
github.com/NowinskiK/ssdt-training/blob/master/src/PrePostDeployment/ContosoRetailDW/Scripts/CreateUsers.sql
@@KamilNowinski Thank you for the update. I will try it.
Hi Kamil, for Azure DevOps I want to deploy my SQL DB using a service principal, not server authentication.
I have it in my plans! Stay tuned.
Also, Kamil, how do we handle rollbacks? Is reverting from git the only option available? Rollback scenarios might occur in the real world, right?
Thanks for great question. Since I'm collecting all good questions in one place, I hope you don't mind - I copied the question to my publicly open GitHub discussion:
github.com/NowinskiK/ssdt-training/discussions/7
It helps me build a knowledge base for the community and will allow us to carry on the discussion properly and in a better format/UI (YT is slightly limited for that).
Is there anywhere I can view a log of the database changes that occurred when a DACPAC was published? I mean, to generate a post-deployment report of what changes were published to the target database.
You can generate a script or a report before the deployment.
It is also possible to add a pipeline step with a manual call of SqlPackage.exe from the command line, with parameters for publishing, reporting and scripting all at the same time, in one step.
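For reference, a sketch of that single step - the Publish action also accepts DeployScriptPath and DeployReportPath (all names below are placeholders):

# One Publish run that also saves the executed script and an XML report
# of the changes that were applied.
SqlPackage.exe /Action:Publish `
    "/SourceFile:bin\Release\MyDatabase.dacpac" `
    "/TargetServerName:myserver" `
    "/TargetDatabaseName:MyDatabase" `
    "/DeployScriptPath:deployed.sql" `
    "/DeployReportPath:report.xml"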
Hi Kamil, would you be able to cover pre- and post-deployment scenarios in a real-world application where you would have multiple prod releases? In that case, how are we going to manage the pre and post SQL scripts, as there would be only one pre and one post SQL file? How would we manage different releases through that file? Also, I asked a question on one of your previous videos - would you mind checking that, please?
What do you exactly mean by "multiple prod releases"?
If you mean "multiple databases on prod" then each database project (even in the same solution) can have its own pre and post-deployment script. Each database has to be deployed separately. One database project = one DACPAC file = one database on a physical server.
Feel free to carry on the discussion here: github.com/NowinskiK/ssdt-training/discussions/8
@@KamilNowinski When I say multiple prod releases, I am referring to multiple release versions of the same DB but on different release dates. So my question is: how do we handle different prod releases of the same DB for pre- and post-deployment script scenarios? Should we use the same pre/post-deployment script for upcoming and future prod releases? If we use the same one, will the devs have to delete the entries for the upcoming prod releases in future releases?
@@devopsbharatiya5418, I answered in the thread above.
introduction @ 6:48
"okay, so let's get started" @ 10:36
Sean, have you seen the timetable in the description? :)
@@KamilNowinski apparently not. :-)
I can hear you
Cool. I hope the video was helpful.
1 hour wasted telling me what others do in 10 min
That was intentional, for people who are not fluent with this technology and want to learn slowly, step by step.
Mate, not everyone is doing what you do in 10 mins. Some do it in 5 secs, so don't be cocky.
@@dariuszspiewak5624 If he/she/that is so gangster, why did he/she/that "waste" 1 hour rather than 10 minutes on the video?
I guess it takes him/her/it 6x as long to know something is a "waste of time"?