SQL Server Clusterless Read Scale Availability Group Setup

  • Published Nov 1, 2024

COMMENTS • 28

  • @Bullbluegill
    @Bullbluegill 2 years ago

    Hey, I literally just saw this today, not sure why I wasn't notified. Thanks for putting up a video, I have yet to go thru but I will tomorrow

  • @mohammadedalatfard2094
    @mohammadedalatfard2094 10 months ago

    excellent video, thank you for your complete explanation.

  • @DeUraq
    @DeUraq 9 days ago

    Hi, that was a good vid. If I have a 3-node setup where replicas 1 & 2 are synchronous and replica 3 is asynchronous but dedicated to read-only, do I still need to create a new listener just for the 3rd replica, or should I assume replica 1 may also be needed for read-only when 3 is not available? Basically I need to know whether I should create multiple listeners. Thanks!

  • @eapradius
    @eapradius 1 year ago

    This is excellent!

  • @Bullbluegill
    @Bullbluegill 1 year ago

    Ok, so my system and user database moves went fine on both servers. I set up the AG on the two test servers, but I didn't use a listener or read-only routing. I also left the primary's readable setting as NO. I'm not sure how the testing will go, but it seems to return data I select on the secondary, and the DBs show as "synchronized" with no data loss. My thought was I'll never fail over manually; just the secondary needs to be readable in the AG. I also have no need for a listener or read-only routing.
    Correct me if I'm wrong in my assumptions above. As long as the DB is replicating and the reporting team can select the data they need on the secondary, that's all I need.
    I set the primary as readable "no" because I never want the reporting team reading or selecting from that prod DB (the primary); that's the reason for this AG setup. I've yet to hand this over to the reporting team, so I don't know if my settings are sufficient.

    • @odintree5704
      @odintree5704  1 year ago

      Hello CS,
      I am at home right now and do not have access to an AG configuration, but you can verify.
      I assume you set the primary to R/W and the connection intent to ReadOnly. I believe this will stop any connection string that has ApplicationIntent=ReadOnly. Check whether they can connect directly to your primary without an intent string and still query.
      If they cannot, you are good; if they can, you may need to remove some database-level SELECT privileges (on the primary only) for users that should have only read access. This will force them to read only from the secondary, whether or not the connection string contains ApplicationIntent=ReadOnly.
      Let me know if that makes sense, and whether you can connect directly to your primary without the intent string in your connection string and still run a SELECT statement, before applying the database-level restrictions I mentioned.
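
      A hedged sketch of the verification described above, using hypothetical names (`AGListener`, `AdventureWorks`): connect with and without `ApplicationIntent=ReadOnly`, then check which replica answered and whether the database is writable there.

```sql
-- Connect two ways (hypothetical listener/database names):
--   sqlcmd -S AGListener -d AdventureWorks -K ReadOnly   -- read-intent connection
--   sqlcmd -S AGListener -d AdventureWorks               -- no intent: lands on the primary
-- On each connection, check which replica answered and whether it is writable:
SELECT @@SERVERNAME AS connected_replica,
       DATABASEPROPERTYEX(DB_NAME(), 'Updateability') AS updateability;
```

      If the no-intent connection reaches the primary and can still SELECT, that is the case where the database-level restriction described above comes into play.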

    • @Bullbluegill
      @Bullbluegill 1 year ago

      @@odintree5704 I guess I should have been more specific: I have one data warehouse developer, and he's the only one touching this. He creates the reports, and they are published out on the SharePoint server. His access is from a data warehouse server via a linked server, so there would really be no accidental way to pull from the primary. I'll test what I have and see if it works; that's all I can do at this point. I'll post up the configs I used once I work with him on the setup.
      I'm not sure if I'm making any sense here, but I haven't set anything for application intent, as I didn't think I would need to.

    • @odintree5704
      @odintree5704  1 year ago

      @@Bullbluegill Thanks CS.
      I do not know if you know the answer, but is your DW multi-dimensional or tabular? Also, do you use cubes? How large is the DW? I really like working with DWs, and I really like the multi-dimensional approach with cubes.
      If you could give a description of your setup I would appreciate it. I am just curious, because I read different things, but I really want to know what people are actually using, not what sales/marketing people are trying to push.
      If you use tabular, I would be curious how large the data is and how much memory is on your servers.

    • @Bullbluegill
      @Bullbluegill 1 year ago

      @@odintree5704 I'll find out for you. I'm still waiting on the DW folks to test... they are super busy implementing a new ERP system... 2 years into it, about 18 months to go.
      I do know this: they use cubes. It's big, as I back up the data; about 800 GB comprised of about 6 DBs. They basically do extracts from our PeopleSoft system and load them into the DW daily. All of our reporting is done via the DW, and reports are housed in SharePoint. Outside of that, I don't know too much. The read-scale AG is for something totally different: reporting off a different set of applications. The DB servers are not super powerhouses: 4 cores, 16 GB, and I allocate 12 to SQL. The extract-and-load process takes about 5 hours, from midnight to approximately 4:30 to 5 am.

  • @Bullbluegill
    @Bullbluegill 1 year ago

    I apologize in advance for this question, as it's somewhat unrelated to this video for the most part. From my understanding, when going to an AG you need to have the same file paths. One server is new and runs 2019; the old one was 2016 until I did an in-place upgrade. Correct me if I'm wrong, but I do not believe the upgrade gave me an option to set the path to the 15 MSSQLSERVER folder (the old path was 13). If it did, I missed the ball there. I doubt it, though, because an installer isn't going to physically move the mdf and ldf files (thinking out loud there).
    I need to move the user DBs from the 13 folder to the 15 folder, and I would also like to move the system DBs. So my question is: is there an order in which I should move these? User DBs first? System first? Master first, then the rest of the system DBs? I can find a lot on the web about all of this except the order in which it should be done. This is kind of an off-the-cuff question, but maybe someone else down the line will run into the same issue.
    I figured I'd move the system DBs as well, simply for consistency moving forward.

    • @odintree5704
      @odintree5704  1 year ago

      Wow, loaded question. As far as I know there is no specific order, but I have a preferred order; it is mine, and I don't say it is right or wrong or the best, or that my logic makes any sense. I move the user databases first, and then the system databases and master. My thinking (right or wrong) is that the master database contains the locations of all user databases and files (just in case a location is corrupted in a user database), and as you move the databases, the locations get updated in master. Then I move master last, and I feel more secure that master has the proper locations.
      Obviously, to move the system databases you need to follow some very specific rules even after they have been moved, such as manually changing the registry entry, certain service account privileges, etc. See the full instructions here: learn.microsoft.com/en-us/sql/relational-databases/databases/move-system-databases?source=recommendations&view=sql-server-ver16
      Good luck. Let me know how you made out and whether you ran into any problems.
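
      The move itself can be sketched in T-SQL for a user database (database name, logical file names, and paths are all illustrative; system databases, and master in particular, need the extra startup-parameter and registry steps in the linked Microsoft doc):

```sql
USE master;
GO
-- Take the user database offline, repoint its files, copy them, bring it back.
ALTER DATABASE SalesDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE SalesDB MODIFY FILE
    (NAME = SalesDB_Data, FILENAME = 'D:\MSSQL15.MSSQLSERVER\MSSQL\DATA\SalesDB.mdf');
ALTER DATABASE SalesDB MODIFY FILE
    (NAME = SalesDB_Log,  FILENAME = 'D:\MSSQL15.MSSQLSERVER\MSSQL\DATA\SalesDB_log.ldf');
GO
-- Physically copy the .mdf/.ldf to the new paths while the database is offline, then:
ALTER DATABASE SalesDB SET ONLINE;
```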

    • @Bullbluegill
      @Bullbluegill 1 year ago

      @@odintree5704 That is exactly how I was going to do it, as I found one guy with a two-part post: he did the user databases, then the system, with master last. I do understand the process of moving everything (the registry, agent job stuff, default paths and such), so I'll let you know how it goes. Thanks again for the reply; very helpful. Sorry for the loaded question, but I'm sure I'm not the only one who will run into this.

  • @bawoademinokan428
    @bawoademinokan428 2 years ago

    Hey man, your videos are amazing. Can you do a video on Dynamic Data Masking? I would love to see your video on it, as everyone out there is just talking and not teaching anything well.

    • @odintree5704
      @odintree5704  2 years ago

      Sounds like a good topic for a video.

  • @sqljobs8548
    @sqljobs8548 1 year ago

    Why not use a different IP address for the listener in an RSAG, other than the primary's, to avoid changing DNS entries time and again after every failover? Does that not work? Sorry, I have only set up availability groups on an FCI and not in an RSAG.

    • @odintree5704
      @odintree5704  1 year ago

      Good question. Unlike a cluster environment, where you point to the cluster IP, the listener here always has to point to the primary replica. SQL Server is cluster aware and can switch automatically with a cluster, but not without one, so we have to manually identify the primary, since the primary always owns the listener in an AG.
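
      Since the listener in a clusterless AG just points at the primary, the usual way to steer report traffic to the secondary is read-only routing on that single listener rather than multiple listeners; a hedged sketch with hypothetical names (`ReadScaleAG`, `SQLNODE1` as primary, `SQLNODE2` as readable secondary):

```sql
-- Route ApplicationIntent=ReadOnly connections that hit the primary
-- over to the readable secondary (run on the primary):
ALTER AVAILABILITY GROUP [ReadScaleAG] MODIFY REPLICA ON N'SQLNODE2'
    WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'tcp://SQLNODE2:1433'));
ALTER AVAILABILITY GROUP [ReadScaleAG] MODIFY REPLICA ON N'SQLNODE1'
    WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLNODE2')));
```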

  • @RyazantsevNikolay
    @RyazantsevNikolay 9 months ago

    At 18:42 you add a local server IP address as the listener IP. Shouldn't it be a separate IP instead? This IP certainly should not match the cluster IP if it were a cluster. Overall, this IP is a VIP (virtual IP), but I'm not sure how it works in read-scale.

  • @Bullbluegill
    @Bullbluegill 2 years ago

    Great video, well done! I do have a few questions though.
    My objective is to create a read-only duplicate on a different server for the report writers. I don't intend to do any failovers, ever. That said, when I select initial data synchronization, what should I choose? I will be backing up the primary DBs with a different tool on an hourly basis, and obviously do not care about the secondary. I guess I'm not understanding all the options. I did restore a backup on my primary server, and they are ready to roll. I'm in the wizard, on the "Select Initial Data Synchronization" screen; I just don't know what the best option is for me. All I care about is that as data is written/updated/deleted, it synchronizes on the replica.
    I have added a listener, but I suppose I won't be using it, considering I'm not doing failovers. Correct me if I'm wrong, please.

    • @odintree5704
      @odintree5704  2 years ago

      Hello CS,
      As to the listener question: you do not need the listener, since you will never fail over; to be truthful, it is used to make failover easy for client apps and the DBA.
      As for the initial synchronization, I will go through each option, starting at the bottom of the list:
      Skip initial data synchronization: only do this if you are going to manually create the secondary databases on all the replicas; probably not what you want.
      Join only: this is used only if the secondary databases already exist on the replicas; again, probably not what you want.
      That leaves automatic seeding and full database and log backup, which are the common choices.
      Automatic seeding: this will automatically create the replicas for all the databases in your group. All SQL Server instances MUST have the same data and log paths.
      Full database and log backup: this method does a full backup and a log backup and uses these to restore on the replicas. You must follow some rules, as I pointed out in the video, concerning identical file paths and network shares.
      So which of these two should you use? Automatic seeding is a very good choice and easier to use, but it should not be used on large databases (MS recommends not using it over 5 TB). You should have good network speed, and sites should be no more than 1,000 miles apart, because this method uses the log stream transport and VDI (virtual device interface).
      The full database and log backup option uses log backups, so remember these log backups will be part of your log chain. Use this option if you cannot do automatic seeding.
      Now remember, all of this is just for the initial creation of the replicas, not the ongoing synchronization process.
      Your ongoing synchronization is determined by asynchronous or synchronous mode. Both use the log, but it is not considered a log backup and does not affect the log chain.
      Therefore, which option you choose for your initial sync has nothing to do with your ongoing backups.
      I would suggest automatic seeding, and then use your 3rd-party backup utility to continue backing up your primary going forward.
      Hope that clears things up.
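
      For reference, a hedged sketch of how the automatic-seeding choice looks in T-SQL for a clusterless read-scale AG (the AG name, replica names, endpoints, and database are all illustrative; run the CREATE on the primary):

```sql
CREATE AVAILABILITY GROUP [ReadScaleAG]
WITH (CLUSTER_TYPE = NONE)                       -- clusterless / read-scale
FOR DATABASE [SalesDB]
REPLICA ON
    N'SQLNODE1' WITH (ENDPOINT_URL = N'tcp://SQLNODE1:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT, FAILOVER_MODE = MANUAL,
        SEEDING_MODE = AUTOMATIC,                -- the "automatic seeding" option
        SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)),
    N'SQLNODE2' WITH (ENDPOINT_URL = N'tcp://SQLNODE2:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT, FAILOVER_MODE = MANUAL,
        SEEDING_MODE = AUTOMATIC,
        SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));
GO
-- On the secondary, join and let seeding create the databases:
-- ALTER AVAILABILITY GROUP [ReadScaleAG] JOIN WITH (CLUSTER_TYPE = NONE);
-- ALTER AVAILABILITY GROUP [ReadScaleAG] GRANT CREATE ANY DATABASE;
```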

    • @Bullbluegill
      @Bullbluegill 2 years ago

      @@odintree5704 That cleared it all up. I did exactly what you said and it worked fine. I didn't set up read-only routing or the listener; I selected automatic seeding, and it created the secondary databases and looks good. One thing I want to clear up, though: it created a backup and restored it to get the secondary... would it be advisable to run a full backup with my 3rd-party app so the LSN chain is not messed up? Not sure if that would happen...

  • @Bullbluegill
    @Bullbluegill 1 year ago

    Hello Sir, I'm back again with another question. I know that when the DB is created it has the same name. Is there any way to give the secondary a different name and still replicate properly? I know it doesn't make a ton of sense to have different names, but it's how the report writers want it. For example, AdventureWorks (primary) and the secondary named AdventureWorks_Prod?

    • @odintree5704
      @odintree5704  1 year ago

      Hello CS,
      Good to hear from you. Unfortunately, in availability groups, as far as I know, the replicated database MUST have the same name as the primary. Just to be sure, I did look it up in the documentation, and it is a requirement. It does make sense.
      Not sure why the report writers want that, but just thinking out loud here: I believe you have your backups done using 3rd-party software on the primary replica. Could you try creating a SQL Server Agent job to do a COPY_ONLY backup on the secondary replica, then restore that backup under the new name and use it for the report writers? I know it seems clunky, but I'm not sure what else to do; maybe someone else has a suggestion. I'm not even sure you can do a COPY_ONLY backup on the secondary replica without a cluster; I know we can do it when it is in a cluster. I'd be curious whether you can.
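
      The workaround described above might be sketched like this (database, share, and path names illustrative; whether a COPY_ONLY backup works on a clusterless secondary is exactly the open question in the reply):

```sql
-- 1) Copy-only full backup (on a readable secondary, only COPY_ONLY full backups are allowed):
BACKUP DATABASE AdventureWorks
    TO DISK = N'\\backupshare\AdventureWorks_co.bak'
    WITH COPY_ONLY, INIT;
-- 2) Restore it under the name the report writers want, relocating the files:
RESTORE DATABASE AdventureWorks_Prod
    FROM DISK = N'\\backupshare\AdventureWorks_co.bak'
    WITH MOVE N'AdventureWorks'     TO N'D:\Data\AdventureWorks_Prod.mdf',
         MOVE N'AdventureWorks_log' TO N'D:\Data\AdventureWorks_Prod_log.ldf';
```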

    • @Bullbluegill
      @Bullbluegill 1 year ago

      @@odintree5704 Thanks for the reply. Yes, I think the same names make total sense, but when it comes to others, they have different ideas, usually based on what's easier for them. I think all their code (and it can't be much to change a DB name) was written for transactional replication and a naming scheme of DB_PD and DB_TS (their idea). We are going away from that because it seems way too hands-on. The agent keeps stopping for no reason, and it's not logged. There are constant changes to the DB, so near-real-time replication of changes is essential. I'll just tell them the names are going to be the same: live with it, change your code. I know you are thinking, how can an agent stop running and not be logged anywhere... believe it. Nothing in history, nothing in a log of any type. I believe it's because it's not failing to start, so it's not logged. Something the network or server team is doing at night is stopping it, but no one seems to want to find it.
      Yes, we use a 3rd-party app for backups. That is also one reason we went away from native SQL backups: agent jobs seemed so unreliable, and visibility was somewhat limited even with alerts. We bought Idera SQL Safe; it was cheap, very easy, and has been fantastic. Visibility is great, and it's super fast too if you don't have encryption enabled using TDE.

    • @odintree5704
      @odintree5704  1 year ago

      @@Bullbluegill Exactly. The developer just needs to do a search and replace. Since the server name is different, and probably even the instance name, there should be no problem. Easy change.
      One thing I have noticed, and was thinking of doing a video on, is a situation with SQL Server Agent jobs. When I create jobs, each command gets its own job step. For instance, say you want to back up 3 databases and you put all 3 BACKUP statements in the same job step command: if the first backup fails you get a notification, but if the 1st backup succeeds and a later backup fails, no notification is sent; the job step appears successful. If you instead put the backup statements in 3 separate job steps, whichever one fails, you get a notification. When new people come in and create a job with multiple commands in one step, I get them to change it when I find it. I just do not have the time to troubleshoot issues that can be easily avoided in the first place with a little extra effort.
      I have never dug too deeply into why this happens; I just create a job step per command, and I do not get these kinds of step-failure problems.
      I assume you checked the sysjobstepslogs and sysjobhistory tables in the msdb database.
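
      The one-command-per-step layout could be sketched like this (job, database, and path names illustrative; hook up the job's failure notification and job server as usual):

```sql
USE msdb;
GO
EXEC sp_add_job @job_name = N'NightlyBackups';
-- One BACKUP per step, so any failing backup fails its own step and the job:
EXEC sp_add_jobstep @job_name = N'NightlyBackups', @step_name = N'Backup DB1',
    @subsystem = N'TSQL', @on_success_action = 3,  -- 3 = go to the next step
    @command = N'BACKUP DATABASE DB1 TO DISK = N''D:\bak\DB1.bak''';
EXEC sp_add_jobstep @job_name = N'NightlyBackups', @step_name = N'Backup DB2',
    @subsystem = N'TSQL', @on_success_action = 3,
    @command = N'BACKUP DATABASE DB2 TO DISK = N''D:\bak\DB2.bak''';
EXEC sp_add_jobstep @job_name = N'NightlyBackups', @step_name = N'Backup DB3',
    @subsystem = N'TSQL',
    @command = N'BACKUP DATABASE DB3 TO DISK = N''D:\bak\DB3.bak''';
```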

    • @Bullbluegill
      @Bullbluegill 1 year ago

      @@odintree5704 My reasons for going away from transactional replication were decided well before the whole agent-job-stopping issue. When you have a DB in replication and there are changes, you have to remove the articles (tables) from replication and then add them back in. To me, that technology was just too hands-on. We have a process; when followed correctly it was fine, but when I wasn't notified or they didn't check articles back in, we had report discrepancies. So it was not a set-it-and-forget-it type of process. Our minds were already made up to move to AGs; the agent jobs were basically just an aftermath. I spent some time on it but didn't spin those wheels too long, as I don't have the cause and am going away from it anyway. I just need to make all file paths and instance names the same and we'll be set.
      Those tables did log something, but a generic MS error, something to the effect of "job was canceled by user blah blah". There is a process that runs nightly, but again, not worth the time at this point. Until I get to AG, I just keep the log reader agent status open and peek at it a few times a day.

    • @odintree5704
      @odintree5704  1 year ago

      @@Bullbluegill I agree - AG is a lot more friendly than replication for a lot of cases.

  • @parvavlogs3722
    @parvavlogs3722 2 years ago

    OdinTree, could you please provide the database schema? I want to practice. Thanks!
