How to automate deployment of Microsoft SQL database with Azure DevOps?

  • Published 5 Nov 2024

COMMENTS • 116

  • @anilnair5978
    @anilnair5978 2 months ago

    That was a great explanation. I'd also like to request a demo of using Azure Active Directory Integrated authentication.

    • @KamilNowinski
      @KamilNowinski  2 months ago

      Thanks! It's very likely I will prepare a refreshed version of this video, and I can use the AAD Integrated authentication method there.

  • @sateeshkumar8152
    @sateeshkumar8152 4 years ago +1

    Hi Kamil, thanks for the video, it really helps.
    One question: in the second part, you modified a table and the release pipeline was able to identify the changed object and deploy only that object. Is this identification done by the "Azure SQL Database deployment" task? I assume the build pipeline's dacpac always contains all the database objects. I am doing an on-premises deployment, so I am using the "SQL Server Database deployment" task, which is not identifying the changed objects and instead deploys all objects every time.
    Can you suggest?

    • @KamilNowinski
      @KamilNowinski  4 years ago

      Yes, exactly. A build pipeline compiles and builds a dacpac file which contains all the objects (metadata). The release pipeline's "Azure SQL Database deployment" task leverages SqlPackage.exe behind the scenes, which actually compares the structure from the dacpac file to the target database structure. You get a differential T-SQL script as the outcome. When it comes to "SQL Server Database deployment": this task also uses SqlPackage.exe and the dacpac to generate the differences and deploy them (unless you choose another option in "Deploy Type").
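
      For illustration, this is roughly what those tasks run under the hood. A minimal sketch only: the paths and server name are hypothetical and authentication parameters are omitted.

          # Generate the differential T-SQL script without applying it
          SqlPackage.exe /Action:Script `
              /SourceFile:"C:\drop\MyDatabase.dacpac" `
              /TargetServerName:"myserver.database.windows.net" `
              /TargetDatabaseName:"MyDatabase" `
              /OutputPath:"diff.sql"
          # /Action:Publish compares and applies the changes in one step instead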

  • @thejareddy6368
    @thejareddy6368 3 years ago +1

    Very informative. I have other projects in the same solution, so in the build pipeline I chose the specific .csproj, but when I try to publish the artifacts I'm getting this error: "##[warning]Directory 'D:\a\1\a' is empty. Nothing will be added to build artifact 'drop'." So I went to the build step logs and found out it was copying the dacpac and DLLs to "\bin\Debug" instead of "\bin\Release". Can you please suggest what I am missing here?

    • @KamilNowinski
      @KamilNowinski  3 years ago

      Thanks. BuildConfiguration (in the build pipeline) is the variable that determines the target folder (Debug/Release).
      It's used in the "Build" task (Configuration field) and the "Copy files" task (Contents field).
      Double-check what you have in these fields and whether the BuildConfiguration variable exists.
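
      For context, the Configuration value is what picks the output folder. A minimal sketch, assuming a hypothetical project named MyDatabase.sqlproj and an agent with the SSDT build targets installed:

          # Building with Configuration=Release writes the dacpac to bin\Release
          msbuild .\MyDatabase.sqlproj /p:Configuration=Release
          # -> bin\Release\MyDatabase.dacpac (Debug would go to bin\Debug)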

  • @SantoshSingh-ki8bx
    @SantoshSingh-ki8bx 4 years ago

    Thank you, Kamil. You explained it very well, so that a person like me from an operations background can grasp it easily. One question: you have taken a source file from your GitHub. Is it mandatory to have this file? Let's say I have to develop this for my organization, how can I achieve it? It would be a great help for me. Can you share your GitHub repo as well?

    • @KamilNowinski
      @KamilNowinski  4 years ago

      @Santosh, thanks for your kind words.
      Answering your question: yes, you have to have a dacpac file somewhere as the source of the database. So, before you build a deployment like that, you must prepare a database project with SSDT in Visual Studio. You can create a new, empty one or import an existing database from a physical server. Pretty soon, I will publish a short video on how to do that.
      The code (SSDT database project) for this video is already available on my GitHub repo here:
      github.com/NowinskiK/ssdt-demo

  • @kapa2220
    @kapa2220 3 years ago +1

    What about the data? What if on DEV we add a single static table with some business rules and we want to move it to PROD? To be honest, I wasn't able to find any valuable materials on YT about data migration for SQL in Azure DevOps.

    • @KamilNowinski
      @KamilNowinski  3 years ago +2

      Check my other video in this channel: Scripting static data from SQL Server tables for SSDT database project - ua-cam.com/video/OWnWkriq4uY/v-deo.html

  • @asharnavya
    @asharnavya 4 years ago

    I hadn't watched the full video when I wrote this question. I was impressed with the big screen behind you; can you please let me know its name and size? I will buy one for my workstation.

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      Sure :) Mine is an ASUS ROG STRIX XG49VQ, 49" DFHD (3840x1080). You might also consider another one with even better resolution, the Dell U4919DW (5120x1440 at 60Hz).

    • @asharnavya
      @asharnavya 4 years ago

      @@KamilNowinski Thank you sooooooooo much!!!

  • @alishbasnet4249
    @alishbasnet4249 4 years ago

    Hey Kamil, your video is really informative. Can you also publish an article or video on how rollback is handled for Azure SQL Database or an on-prem database in a pipeline? What methods are being adopted?

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      Alish, to be precise here: once you commit the transaction covering the deployment script, there is no option to roll back the changes automatically, and you're aware of that, right? Hence, you want to see how to roll back the transaction of a deployment script executed during a release pipeline, I guess, correct?

    • @alishbasnet4249
      @alishbasnet4249 4 years ago

      @@KamilNowinski That's correct, Kamil. Currently I am using a database snapshot approach in the pipeline before deployment; do you think it's a good approach?

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      @@alishbasnet4249 Yes, a snapshot of the physical database might be one option if that suits your needs. Another option is to wrap the entire script in a transaction. You can control that behaviour with a publish option.
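
      For reference, the publish option in question is a documented SqlPackage property; the remaining parameter values below are hypothetical:

          # Wrap the generated deployment statements in transactions where possible
          SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" `
              /TargetServerName:"myserver.database.windows.net" /TargetDatabaseName:"MyDb" `
              /p:IncludeTransactionalScripts=true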

    • @alishbasnet4249
      @alishbasnet4249 4 years ago

      @@KamilNowinski Is there any command to script it in the pipeline? And any suggestion for Azure SQL Database, since there are no snapshots available for Azure SQL DB?

  • @pankajkrlakhchaura
    @pankajkrlakhchaura 2 years ago

    Great knowledge sharing, Kamil. I have a query, if you could answer it: why are you creating a separate pipeline for SQL deployment? Can't we have tools to deploy the SQL artifacts together with the code? I have a big project which has three components to deploy together: Angular for the UI, .NET Core for the API, and SQL tables and procedures. When a developer completes the development of new functionality, he commits the Angular code, the API code, and the SQL tables and stored procedures. The challenge is then to deploy all three parts together (in sequence, of course) through CI/CD. I do not see any way to do this; I have googled it but did not find a solution. For code, CI/CD pipelines exist, but they are not in sync with the SQL objects. Do we have any way to do this?

    • @KamilNowinski
      @KamilNowinski  2 years ago

      First of all, as you noticed, you must deploy each component separately, in sequence. Because of the different nature of each component or code, you must treat them distinctly too: a separate task for Angular, another for .NET Core/API, and another one for the SQL database. For a SQL database, you actually need two steps: one for the infrastructure (Azure SQL Server) and a second for the database and all its objects (tables, SPs, etc.). A sketch of that ordering follows below.
      👨‍🎓 If you want to learn more - consider my commercial course ($99 only):
      learn.sqlplayer.net/ssdt-essentials
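
      As an illustration of that sequencing, one release stage could run three steps in order. A rough sketch only: the resource names and paths are hypothetical, and 'az webapp deploy' stands in for whatever hosting model the project actually uses:

          # 1) UI: publish the built Angular bundle
          az webapp deploy --resource-group rg-app --name app-ui --src-path .\ui.zip --type zip
          # 2) API: publish the .NET Core API
          az webapp deploy --resource-group rg-app --name app-api --src-path .\api.zip --type zip
          # 3) Database: publish the DACPAC produced by the build
          SqlPackage.exe /Action:Publish /SourceFile:".\db\MyDb.dacpac" `
              /TargetServerName:"myserver.database.windows.net" /TargetDatabaseName:"MyDb"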

  • @alexreactnow
    @alexreactnow 1 year ago

    Hi Kamil, thanks for the video! I have a question: how is network access to Azure SQL granted for the DevOps agent? And in the case of deploying to a SQL VM, how could I grant the network access? Thanks.

    • @KamilNowinski
      @KamilNowinski  1 year ago

      You need to configure the Azure SQL Server appropriately in the Security/Networking tab. One of the easiest options is to check "Allow Azure services and resources to access this server", which you can find at the bottom of the page.
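
      The same setting can also be enabled from the CLI; the special 0.0.0.0 rule below is the documented way to represent it, while the resource names are hypothetical:

          # "Allow Azure services..." equals a firewall rule with 0.0.0.0 on both ends
          az sql server firewall-rule create --resource-group rg-sql --server myserver `
              --name AllowAllWindowsAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0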

  • @sathiyalr
    @sathiyalr 3 years ago

    Thanks, Kamil. Very useful. Thank you so much.

  • @fortnitegamer8306
    @fortnitegamer8306 4 years ago +1

    What permissions are needed to be able to connect to an on-prem server and Azure?

    • @KamilNowinski
      @KamilNowinski  3 years ago

      Alex, you need db_owner. Not only for connection purposes, but mainly due to the type of actions that need to be done.
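
      A minimal sketch of granting that role, assuming a hypothetical deployment login named devops_deploy (T-SQL, run in the target database, because the grant itself is a database operation):

          -- Map the login to a database user and add it to db_owner
          CREATE USER [devops_deploy] FOR LOGIN [devops_deploy];
          ALTER ROLE db_owner ADD MEMBER [devops_deploy];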

  • @punsiitg
    @punsiitg 4 years ago

    Hi Kamil,
    Thank you for sharing the details on how to continuously deploy to Azure SQL databases. In the classic editor, is it possible to
    1) deploy to QA only on a schedule, and continuously to the Dev stage?
    2) add approvals and checks, such that before a release goes to the QA stage, someone must approve it?
    Thank you.

    • @KamilNowinski
      @KamilNowinski  4 years ago

      Puneet, answering your questions:
      1) you can deploy to any selected stage (when creating a new release you can mark stages as to be run manually) or in an arranged order,
      2) yes, the other user or group of users gets an email and must approve it.

  • @cjilmacy8188
    @cjilmacy8188 4 years ago

    Hello, and thanks for this awesome video. I was just wondering: is there a way to automatically release any changed script (once committed) at once? I mean, is it possible to release multiple scripts at once using one SQL Database Deployment task?

    • @KamilNowinski
      @KamilNowinski  4 years ago

      Yes, it is possible. You can automatically run CI (the build pipeline) whenever a change is pushed. At the end of this process, you can subsequently run CD (the release pipeline). There you use the SQL Database Deployment task, which uses SqlPackage under the hood to compare the DACPAC (the result of the build done in CI) to the target database and generate ONE migration script containing all the changes made in your code since the latest deployment. Does that make sense and answer your question?

  • @thorstenquint601
    @thorstenquint601 3 years ago

    Hi Kamil, thanks for this great how-to. Very clearly explained and understandable. Unfortunately, I'm facing two issues. First, I created the build pipeline as you explained, but the build process runs into an error: Process 'msbuild.exe' exited with code '1'. The error occurs during the task "Creating a model to represent the project"; I can see a FullyQualifiedErrorId "Microsoft.Powershell.Commands.WriteErrorException,Invoke-VstsTool". Can you help me fix this?
    The other point is, I'm not able to see the table designer in VS2019 for my Azure SQL database. With a local SQL Server it works. What could the issue be? Thanks a lot for your response.
    Thorsten

    • @KamilNowinski
      @KamilNowinski  2 years ago

      Hi Thorsten. YouTube is not the perfect place for conducting that kind of conversation, but GitHub has a great feature for that. If this issue is still valid, please submit it here: github.com/NowinskiK/ssdt-training/discussions and give more details. Thx!

  • @deepakpatil5059
    @deepakpatil5059 1 year ago

    Hi @Kamil,
    I have set up the CD pipeline successfully. I tried to delete an existing trigger in Dev and published successfully. The new artifact doesn't have that deleted trigger (I mean, the parameter.json file doesn't have it), but when I create a release deployment to UAT the trigger is not getting deleted.
    Does a deleted object not move through the release? Do we need to delete it manually?
    Thanks,
    Deepak

    • @KamilNowinski
      @KamilNowinski  11 months ago

      First of all, your DACPAC file should not contain the deleted trigger; then, during the deployment (publish action), the script should generate a DROP TRIGGER, provided you set the appropriate publish options.
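
      The publish option that controls this behaviour is a documented SqlPackage property; the other parameter values below are hypothetical:

          # Drop objects that exist in the target but not in the DACPAC (triggers included)
          SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" `
              /TargetServerName:"myserver.database.windows.net" /TargetDatabaseName:"MyDb" `
              /p:DropObjectsNotInSource=true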

  • @agasti2007
    @agasti2007 3 years ago

    Hi Kamil,
    I have 2 questions and would really appreciate your input.
    1. In the build pipeline, what is the significance of the second task, where we copy to the artifact staging directory, and why do we need to do that?
    2. In our database (Azure SQL Database), there are a few tables whose DDL changes dynamically as part of ETL. As a result, I am unable to bring the DB under Git, as the release modifies the schema of those tables. I can't separate these tables from the DB, as they are used heavily in other procs and views, since they basically contain source data. I could not find any way to make the project "ignore" the schema differences on these tables (isolated in a separate schema).

    • @SloMoShort
      @SloMoShort 3 years ago +1

      Change the "Build Action" property of all scripts for those tables in the database project from "Build" to "None". Then the database project won't make any DDL changes to them.
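
      For reference, that property change corresponds to an edit like this inside the .sqlproj file (the file name is hypothetical; the element names are the standard SSDT project format):

          <!-- before: the table is compiled into the model -->
          <Build Include="Tables\StagingTable.sql" />
          <!-- after: the file stays in the project but is excluded from the build -->
          <None Include="Tables\StagingTable.sql" />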

    • @agasti2007
      @agasti2007 3 years ago

      @@SloMoShort That seems like a great idea. I will try it.

    • @KamilNowinski
      @KamilNowinski  3 years ago

      ad 1) That step (Copy Files) is not necessary, though it is helpful IMO. It helps you filter, copy, and thus prepare all the files to be wrapped up as an artifact in the next step.

    • @KamilNowinski
      @KamilNowinski  3 years ago

      "Ignore" is one option. Another is to not include those files in the database project. In that case, you get warnings from other objects (SPs, views) that the referenced table cannot be found, and thus you can't set the "treat warnings as errors" option. It's not a perfect solution then, because all the warnings get blended together and you may miss important missing references.

  • @santhoshsreshta
    @santhoshsreshta 4 years ago

    Really great content. A new subscriber here 😄

  • @devopsbharatiya5418
    @devopsbharatiya5418 3 years ago

    Hi Kamil, the video is very good, informative, and helpful for the case where the DACPAC, which only has the DDL changes, is deployed. However, I need your guidance on the questions below. If you could help me with some direction, it would really help us improve our process.
    Q1. How do we handle DML changes which are common, like master data, and also environment-specific DB data?
    Q2. Should the Action property of all the SQL files in the DB project be set to Build or None when there are around 100+ developers working on the same DB project but on different DB objects?
    Q3. Currently in our project, there are around 10+ DBs in the same DB solution file, because there are references between these DBs. As a result, the build time for the entire DB solution is more than 60 minutes, which is not acceptable in a CI scenario. To mitigate this build time, our DB automation team has configured the DB project so that all DB objects are excluded, and only the object which needs a change is included by a developer, say developer1, so the project only contains developer1's DB changes. Developer2, who commits his change after developer1, will not have developer1's changes and can deploy independently. Is this the correct approach? Per my understanding, a CI build should contain all the developers' changes, i.e. the existing code in the repo plus the new change being checked in. Is there a better way to reduce the build time when there are references between the DBs? The other issue is that the PROD release may fall on a different day than the check-in day, where it would make more sense to deploy the latest Release-100 out of 100 releases rather than deploying Release-1 to 100 on the release day, because each developer's build artifact is different due to the exclusions and inclusions.

    • @KamilNowinski
      @KamilNowinski  3 years ago +1

      Thanks for the great questions. Since I'm collecting all good questions in one place, I hope you don't mind that I copied the question to my publicly open GitHub discussion:
      github.com/NowinskiK/ssdt-training/discussions/
      It helps me build a knowledge base for the community and will allow us to carry on the discussion properly and in a better format/UI (YT is slightly limited for that).

  • @swapnilgupta1364
    @swapnilgupta1364 3 years ago

    Hi Sir,
    I have to add a test plan before creating a build. From this video I understood creating the build and the release; now I'm facing an issue doing testing before the build. Can you please give me some idea? Or, if you create a video for that as well, it would be really helpful.

    • @KamilNowinski
      @KamilNowinski  3 years ago

      What kind of issue are you facing? What do you want to test if you don't have a completed build?

  • @rodrigueschetan
    @rodrigueschetan 4 years ago

    Hi Kamil,
    I have one question regarding the dacpac.
    Can we select only those schemas from the database which we want to deploy, and ignore the others?
    Thanks,
    Chetan

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      It is possible, but not available out of the box.
      You must use an external library which extends the capabilities of SqlPackage and allows you to filter objects during DACPAC deployment.
      Check out this tool: DeploymentContributorFilterer. A link to the tool and other useful resources is here: sqlplayer.net/ssdt/
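
      A rough sketch of how such a contributor is wired in. The two /p: properties are standard SqlPackage options; the contributor name comes from the DeploymentContributorFilterer project, and the filter expression syntax is an assumption here, so check that project's README for the exact grammar:

          SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" `
              /TargetServerName:"myserver.database.windows.net" /TargetDatabaseName:"MyDb" `
              /p:AdditionalDeploymentContributors=AgileSqlClub.DeploymentFilterContributor `
              /p:AdditionalDeploymentContributorArguments="SqlSchemaFilter(Keep,'dbo')"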

  • @anvicom
    @anvicom 2 years ago

    How do we set the order in which SQL scripts get deployed? What if I want TableA.sql to be deployed first, then TableB.sql, and then stored procedure A followed by stored procedure B? Where do we configure the script dependencies for both kinds of deployment, DACPAC deployment and SQL file(s) deployment?

    • @KamilNowinski
      @KamilNowinski  2 years ago

      The beauty of using DACPAC files and SqlPackage for deployment is that SqlPackage does all of this for you, ordering the changes by object dependencies. Full stop.
      Check out this very short & free lesson in my commercial course:
      learn.sqlplayer.net/view/courses/ssdt-essentials/981098-module-4/2932061-introduction
      👨‍🎓 Consider buying full access to the course if you want/need to learn more.

    • @SloMoShort
      @SloMoShort 2 years ago

      You can't. If the tables are in the build, they will get compiled together. If you want to run them in a specific order, create each table script as an sp_executesql script and put them in order in the post-deployment script.

  • @philiplewis4495
    @philiplewis4495 3 years ago

    Hi Kamil,
    I'm running into an issue where the build runs successfully, but a warning is generated because no build objects are picked up by the copy or drop tasks. When I examine the build log, it shows a dacpac is being created: "D:\a\1\s\myDatabase\bin\Debug\myDatabase.dacpac". In the copy task I have "found 0 files". There is a resource group project in my solution, and when I include it in the build it finds those files. I'm not sure what to try next, so any advice would be good. Thanks.

    • @philiplewis4495
      @philiplewis4495 3 years ago

      Hi Kamil, I was looking through some of the comments and hit one with a similar issue. My VS project's build configuration was set to Debug, whereas the pipeline variable was set to Release, so the copy task was looking in the wrong location for the file. Thanks for the video.

    • @KamilNowinski
      @KamilNowinski  3 years ago

      @@philiplewis4495 Yeah, this is what I wanted to suggest checking. I'm happy that it works now.

    • @philiplewis4495
      @philiplewis4495 3 years ago

      @@KamilNowinski Thanks for the note. I'm looking forward to using this for deployments.

  • @rajarajan7738
    @rajarajan7738 1 year ago

    Hi @Kamil,
    I have a question:
    I have a bunch of SQL scripts in a repo folder, and I want to deploy only those script files which have changes.
    Thanks

    • @KamilNowinski
      @KamilNowinski  1 year ago

      ...changes... in comparison to what?

    • @LongLe-qz1ui
      @LongLe-qz1ui 1 year ago

      @@KamilNowinski Compared to the current version of those scripts existing in the database.

  • @michaeledwards6497
    @michaeledwards6497 3 years ago

    Once I run the release pipeline, I'm getting: "A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 0 - No such host is known.)". I tried allowing all IPs from 0.0.0.0 to 255.255.255.255 but still get the same error.
    Note: I'm able to log into the server using SSMS.

    • @KamilNowinski
      @KamilNowinski  3 years ago

      Check precisely what you put in the 'Azure SQL Server' field as the name.

  • @juanguillermopalma
    @juanguillermopalma 4 years ago

    Hi Kamil, how are you? I saw this video and I have a doubt about how to do a rollback. For example, I drop a table, but now I need to roll back that change. So the question is: how can I schedule a backup, or something similar, to do a rollback?

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      Hi Juan, great question. This has nothing to do with DACPAC and SqlPackage deployment, unless you want to roll back the transaction during the deployment; I believe this is not your scenario. Once a deployment has finished successfully, there is no open transaction remaining. Consequently, you cannot do a rollback at all. The question would be: who dropped that table, and why? If the table was removed from the SSDT project, just revert the code back to the point where you had the table, and the next database deployment will redeploy it. No data? Sure: use the latest backup to restore the database somewhere aside and restore the data. Anyway, that scenario should not happen when you have an automated deployment process and a new release goes through the dev, test, and uat/pre-prod environments first.

    • @juanguillermopalma
      @juanguillermopalma 4 years ago

      @@KamilNowinski Exactly, but can you run a backup automatically, before your deployment, with Azure DevOps?

    • @KamilNowinski
      @KamilNowinski  4 years ago

      @@juanguillermopalma Generally speaking, backup policy and schedule should be a separate thing, but of course you can. Insert such a task before the DACPAC deployment. I'd suggest using the dbatools PowerShell module, but there are plenty of ways to do it.
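
      A minimal sketch with dbatools (Backup-DbaDatabase is that module's cmdlet; the server, database, and path below are hypothetical):

          Install-Module dbatools -Scope CurrentUser
          # Take a copy-only backup right before running the DACPAC deployment
          Backup-DbaDatabase -SqlInstance 'sql01' -Database 'MyDb' `
              -Path '\\backups\predeploy' -CopyOnly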

  • @michaeltadesse3678
    @michaeltadesse3678 2 years ago

    Hi Kamil, I was using Azure DevOps board data as a source for my Power BI report in import mode, but now I want to make the refresh dynamic, so I tried to change the import mode to Live Connection. For that, I have to move the data from DevOps to SQL. So my question is: can I move data from DevOps to SQL? If yes, how? Please, I need help. Thanks.

    • @KamilNowinski
      @KamilNowinski  2 years ago

      What do you mean by "make the refresh dynamic"? How often do you need to refresh the data from DevOps? Do you need a real-time dashboard? If so, I'm not aware of any out-of-the-box solution, but you can build a bespoke solution using service hooks, which can generate actions based on Azure DevOps events. Then use Azure Event Hubs and convert the events into rows (insert/update) in an Azure SQL database. Not perfect or easy, but it will work.

  • @magnushagdahl54
    @magnushagdahl54 4 years ago

    Hi Kamil,
    thanks for an excellent tutorial.
    We are trying to be clever in our deployment, but it gives odd errors, such as this (when trying to deploy in Azure DevOps using a DACPAC):
    Error: Warning SQL46029: Error validating element [dm].[usp_dPrimaryDefaultStore_bkp20180815]: Unexpected end of file occurred.
    This object does not exist in the VS project, only in the target database. Why does it give errors on non-existent objects? We have set these params for SqlPackage.exe:
    /p:BlockOnPossibleDataLoss=false /p:AllowDropBlockingAssemblies=true /p:TreatVerificationErrorsAsWarnings=true
    My second question: is it possible to have the deployment totally exclude some schemas? We have a core set of schemas (dbo, dm, raw) that we version control in Git (own repo). The database also contains another schema (adhoc) that our power users can use; this schema is irrelevant to version in Git. But we seem to get validation/deployment errors because the DACPAC doesn't have all the schemas that the target has.
    Any advice is much appreciated.

    • @KamilNowinski
      @KamilNowinski  3 years ago

      Hey Magnus.
      Answering your first question: I don't know, as I can't see the project you've got; sometimes SSDT behaves weirdly in specific circumstances. The second one: yes, it's possible, but not out of the box. You must use an extension to SSDT, like DeploymentContributorFilterer. Take a look at this code (PS script) and try it out: github.com/NowinskiK/ssdt-training/blob/master/src/sqlpackage-selective/demo1.ps1

  • @MukkaSiva
    @MukkaSiva 1 year ago

    Hi everyone, I created a new table in a Visual Studio project and pushed it to the repo. CI/CD succeeded, but the table is not deployed to the target database. Does anyone know the reason and can help me? Note: I am able to add new columns to an existing table, and they get deployed to the target database.

    • @KamilNowinski
      @KamilNowinski  11 months ago

      Please check out my other video on this channel and follow it step by step: ua-cam.com/video/4N_fv6d3KQY/v-deo.html
      I guess you have some small mistake in your DB project.

  • @stefm6764
    @stefm6764 3 years ago

    Hi, I am trying to run a batch script task. I copied the file to the root directory as per Microsoft's instructions, but the task still cannot find it, so my pipeline fails. I would really appreciate it if you could tell me where to place the file so that it gets picked up. Is there a way to access the VM directory? Any ideas, guys?

    • @KamilNowinski
      @KamilNowinski  3 years ago

      StefM, I'm not sure which batch script and which Microsoft instructions you are talking about. Is it something related to this video? Please raise an issue here; maybe I'll be able to help once I have more details: github.com/NowinskiK/ssdt-training/issues

  • @sreejithrc9411
    @sreejithrc9411 2 years ago

    We are using SQL Server. After the release pipeline, I am losing the table indexes. Can you suggest a solution for this?

    • @KamilNowinski
      @KamilNowinski  2 years ago

      Presumably you don't have them in the database project and have "Drop indexes not in source" enabled; hence the result. Change one of these and that resolves the problem.

    • @sreejithrc9411
      @sreejithrc9411 2 years ago

      @@KamilNowinski Thanks for your reply

  • @alishbasnet4249
    @alishbasnet4249 4 years ago

    Hi Kamil, I'm just stepping into DevOps. I am trying to run the release pipeline against my on-premises SQL database, not Azure. Is there any firewall setting or access that I have to configure? I am getting an error that the server is unreachable. Can you suggest?

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      Naturally, the Azure DevOps agent must have access to the target SQL Server. You need to open a port (by default it is 1433) for incoming traffic. You can use an Azure VM to test it from SSMS. Another approach would be provisioning your own DevOps agent on the local network.
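
      A quick way to verify reachability from the machine that will run the deployment (the server name is hypothetical; Test-NetConnection is a built-in PowerShell cmdlet):

          # TcpTestSucceeded = True means the SQL port is reachable
          Test-NetConnection -ComputerName 'sql01.corp.local' -Port 1433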

    • @alishbasnet4249
      @alishbasnet4249 4 years ago

      Kamil Nowinski I opened port 1433 but was not able to connect from the release pipeline. After scratching my head for an hour, I ended up creating the azagent on a local machine with the PS script, which finally let it connect to the DB. But, as you mentioned, if the port is open on my local machine, there is no need to create the agent locally; is that true?

    • @alishbasnet4249
      @alishbasnet4249 4 years ago

      Really, hats off for your videos; it's really great content. I would like to see more on SSIS, SSAS, and Synapse deployment if possible :)

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      @@alishbasnet4249 That's correct. You should double-check with your network/infra team whether incoming traffic (from the Azure agent to the company network) isn't blocked at any stage.

  • @chelseaanthony8417
    @chelseaanthony8417 3 years ago

    Hey Kamil, maybe you can help.
    I have a DACPAC solution file that has variables referencing the instance name of a database that is specific to UAT. However, when I deploy the same artifact to production it won't work, because that DB instance doesn't exist in prod. How do we solve this? Can we have two DACPACs created, each with a different variable value, and how would I deploy the right one to the correct environment?
    I could create a new branch, do a new build, and deploy it to prod, but I would rather have the release setup that you have, by cloning the release, and somehow have just this one instance value differ between the environments (UAT and prod, or for that matter Dev, UAT, etc.).

    • @chelseaanthony8417
      @chelseaanthony8417 3 years ago

      In this link (social.msdn.microsoft.com/Forums/sqlserver/en-US/302587bf-ff9c-4f22-94dc-a96587f6b304/how-do-we-perform-sqlcmd-variable-substitution-at-deploytime?forum=ssdt) they talk about providing variables to the dacpac files. So can I do a copy task, copy the dacpac to the server, and execute the .dacpac there with the variable (SqlPackage.exe /Action:Publish /SourceFile:TestVarProj.dacpac /TargetServerName:TestServer\TestInstance /TargetDatabaseName:TestDB /Variables:TestVar=FooBar)?
      Any example of this would help me resolve the instance name that changes as I move from one stage to the other.
      Also, what is the syntax to maintain that variable in the solution file that I will build?

    • @KamilNowinski
      @KamilNowinski  3 years ago

      You should ALWAYS have only ONE DACPAC file (no matter how many environments you're deploying to).
      Variables are your friends. They are used when you create a reference between databases, and their values can be substituted while deploying the DACPAC to a target server.
      I put your question in the Q&A Discussions hub on my repo related to SSDT, so I won't forget to prepare an example for you:
      github.com/NowinskiK/ssdt-training/discussions/2
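
      A minimal sketch of that single-DACPAC pattern (the server names and the variable are hypothetical; /v: is SqlPackage's short form of /Variables:):

          # Same artifact, different SQLCMD variable value per stage
          SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" `
              /TargetServerName:"uat-sql.corp.local" /TargetDatabaseName:"MyDb" /v:InstanceName=UAT01
          SqlPackage.exe /Action:Publish /SourceFile:"MyDb.dacpac" `
              /TargetServerName:"prod-sql.corp.local" /TargetDatabaseName:"MyDb" /v:InstanceName=PROD01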

    • @chelseaanthony8417
      @chelseaanthony8417 3 years ago

      @@KamilNowinski Thx Kamil, looking forward to those examples with great eagerness.

    • @chelseaanthony8417
      @chelseaanthony8417 3 years ago

      @@KamilNowinski Again, I am sorry, you may have a lot of things on your plate, but is there any tentative timeline for when this video will be released? Some of my releases are based on it, and for now I am creating long and onerous scripts, one for each instance. Thx in advance.

    • @KamilNowinski
      @KamilNowinski  3 years ago +1

      @@chelseaanthony8417 Unfortunately, that won't happen in the next few months. There is a simple and strong reason for that: I'm hugely focused on finishing my SSDT online course right now, which premieres on 3rd July: learn.sqlplayer.net/database-projects-with-ssdt-dacpac
      If you are very interested in SSDT, this course would be a great deal for you. Sign up for free tips & tricks:
      learn.sqlplayer.net/ssdt-tips
      and you'll receive the pre-sales price (ends this Sunday). Anyway, you can check the free demo code from the course as well, which can help in your case:
      github.com/NowinskiK/ssdt-training/tree/master/src/Variables
      Chelsea, I hope that helps you, at least a bit.

  • @sahilchaudhry08
    @sahilchaudhry08 4 years ago

    How do we integrate key vault for fetching sql credentials?

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      You can add an additional step beforehand to download the Azure Key Vault secrets and use them as $(secretVariables) in the pipeline. It's the "Azure Key Vault" task, written by Microsoft.
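
      Outside of that task, the same lookup can be scripted; the vault and secret names below are hypothetical, while az keyvault secret show is the standard CLI call:

          # Fetch the SQL admin password from Key Vault into a variable
          $sqlPassword = az keyvault secret show --vault-name kv-myproject `
              --name sql-admin-password --query value -o tsv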

    • @sahilchaudhry08
      @sahilchaudhry08 4 years ago +1

      Kamil Nowinski Thanks a lot for your reply. Is it possible to use a service principal for authentication rather than using admin credentials via server authentication?

  • @abhishekchaube5773
    @abhishekchaube5773 4 years ago

    I cannot see a drop folder created. Why is my artifact folder not getting created?

    • @KamilNowinski
      @KamilNowinski  4 years ago

      I can try to help if you provide more details. Could you create a question on Stack Overflow and give me the link? stackoverflow.com/questions/tagged/azure-devops

  • @shirvanian
    @shirvanian 4 years ago

    Hi Kamil, if there is data in the tables, does it drop the tables and create the objects again?

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      It depends on the changes you made. Sometimes it only adds a new column; other times it alters the existing columns. When re-creating a table is required, the process copies all the data, so in theory you shouldn't lose any data. Anyway, it depends on the changes to the table. Good question, though. I think I will prepare a separate video on that topic alone. Thanks.

    • @shirvanian
      @shirvanian 4 years ago

      How about if I want to have a migration script from the last release, including schema changes and data changes? Is that possible?

    • @KamilNowinski
      @KamilNowinski  4 years ago

      While data is a separate subject, the migration script is generated during the deployment process, which compares the source schema (DACPAC) to the target database.

    • @rimshayousaf8776
      @rimshayousaf8776 4 years ago

      @@KamilNowinski I tried to rename a column and it generated an error related to possible data loss. When I added a flag to allow possible data loss, it deleted the whole column along with its values and then added a new column with the new name. Is there a sane way to rename a column with the Azure SQL DacpacTask?

    • @KamilNowinski
      @KamilNowinski  4 years ago

      @@rimshayousaf8776 If you want to cope with data in existing tables, you must leverage a pre-deployment script to manage that. The pre-deployment script should deliver the change in schema as well as in data. Remember to protect your (pre) script against multiple runs. That method is similar to the migration-based one, and applying it within SSDT makes your approach hybrid, which is totally fine.
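
      A minimal sketch of the "protect against multiple runs" idea: an existence check keeps the pre-deployment step idempotent. The table and column names are hypothetical, and how the rename interacts with the generated diff depends on the project, so treat this as an illustration of the guard rather than a complete rename recipe:

          -- Pre-deployment: only act while the old column still exists
          IF EXISTS (SELECT 1 FROM sys.columns
                     WHERE object_id = OBJECT_ID('dbo.Customer') AND name = 'OldName')
              EXEC sp_rename 'dbo.Customer.OldName', 'NewName', 'COLUMN';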

  • @rajasekharreddy8427
    @rajasekharreddy8427 3 years ago

    How do I execute multiple SQL scripts from Azure DevOps against an Azure SQL database?

    • @KamilNowinski
      @KamilNowinski  3 years ago

      I'm not sure what you want to achieve. Could you post your question on the forum and elaborate on it, please? We can discuss it then: github.com/NowinskiK/ssdt-training/discussions

  • @santhoshsreshta
    @santhoshsreshta 4 years ago

    Hey, visiting once again. Is there any way to see the delta script which the release pipeline executed after it compared the DACPAC against the target database?
    Let's say I've added a column in my DB and I build the solution (a DACPAC is created) and the release deploys it (I guess it compares the DACPAC with the target DB, works out the delta code, and executes it). Is there any way to get the code which it executed after the comparison?

    • @santhoshsreshta
      @santhoshsreshta 4 years ago +1

      I got it: I used the same deployment task and selected the action "Script" instead of "Publish". Thank you.

    • @KamilNowinski
      @KamilNowinski  3 years ago

      Yes, that's correct. Thanks for answering this; I couldn't reply earlier.

  • @santhoshsreshta
    @santhoshsreshta 4 years ago

    Hey, I understand this is a state-based approach to database deployments. Do you have any content on your blog or vlog about the migration-based approach as well? If so, could you please share the links?
    I'm referring to this link for state-based vs migration-based: devblogs.microsoft.com/azure-sql/devops-for-azure-sql/

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      Yes, all of these are about the state-based approach. I don't have anything on the migration-based approach yet. However, a hybrid approach is often used within SSDT. That kind of post or video will appear on my channel.

  • @AnkurJainSolArchitech
    @AnkurJainSolArchitech 4 years ago

    Hi Kamil,
    Thanks for the video, really helpful.
    How do you do a selective deployment of tables? E.g., how can you update the build to ignore 2 tables out of 5?
    Also, is it possible to integrate another deployment contributor?
    github.com/GoEddie/DeploymentContributorFilterer

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      TBH, I've never had to do that, but the link you provided is absolutely the best way to do this. The library is written by my friend Ed, and it allows you to filter objects by schema name, object name, etc. Go ahead with it if your solution requires such an approach. I will prepare a video explaining that approach in the coming months.

    • @santhoshsreshta
      @santhoshsreshta 4 years ago

      @@KamilNowinski Thanks for the wonderful content, and most importantly you have done everything from scratch, which is really helpful for starters like me (a week old here).
      I'm also looking for a similar type of requirement, where I need to do selective object migration; I would love to see that video soon.
      Stay safe! 🙌

    • @KamilNowinski
      @KamilNowinski  4 years ago +1

      @@santhoshsreshta Thanks for your kind words. TBH, selective deployment from a DACPAC is on my to-do list as a video, and it should happen soon. Stay tuned.