ranjan kumar mahapatra
Joined 28 Dec 2007
6.02 Generate the parameter file dynamically from table for multiple mappings using IICS #iics
Generate the parameter file dynamically from table for multiple mappings using IICS
Table:
CREATE TABLE "SCOTT"."SRC_PARAM1"
( ID NUMBER(*,0),
PARAM_NAME VARCHAR2(2000),
PARAM_VAL VARCHAR2(2000)
)
Sample Insert Statement:
insert into SRC_PARAM1
values
(5,'mt_m_src_stg_emp3',
'[Demo_Project].[Scenarios].[mt_m_src_stg_emp2]
$$DEPTNO=40
$$Country_CD=IND'
)
SQ:
select param_name,param_val from src_param1 order by ID
Expression:
decode(true,
param_name='USE_SECTIONS', param_val||chr(10),
param_name != 'USE_SECTIONS' and param_name != 'GLOBAL', param_val||chr(10),
param_name='GLOBAL',param_val,
NULL
)
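Preview SQL (sketch): a rough, hedged way to check what the mapping should write — this Oracle query, run directly against SCOTT outside IICS, simply joins every PARAM_VAL with a newline in ID order, so it approximates (but does not exactly replicate) the conditional chr(10) logic of the expression above; the LISTAGG call is for illustration only, not part of the mapping.
-- Concatenate PARAM_VAL in ID order with a newline between rows
select listagg(param_val, chr(10)) within group (order by id) as param_file_text
from scott.src_param1;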
Views: 285
Videos
6.01 Generate the parameter file dynamically from table using IICS
353 views · 2 months ago
Generate the parameter file dynamically from table using IICS Table : CREATE TABLE "SCOTT"."SRC_PARAM" ( USE_SECTIONS VARCHAR2(20), Project_Name VARCHAR2(200), Folder_Name VARCHAR2(200), Task_Name VARCHAR2(200), DEPTNO VARCHAR2(20), GLOBAL VARCHAR2(20), CONN_SRC VARCHAR2(200), CONN_TGT VARCHAR2(200) ) Data in the table: #USE_SECTIONS [Demo_Project] [Scenarios] [mt_m_src_stg_emp] $$DEPTNO=30 [Gl...
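Based on the column list in that description, here is a hedged Oracle sketch of how the single SRC_PARAM row could be flattened into parameter-file text; the column order and the connection lines are assumptions, and each column is taken to already hold one ready-made line of the file.
select use_sections || chr(10)
    || project_name || chr(10)
    || folder_name  || chr(10)
    || task_name    || chr(10)
    || deptno       || chr(10)
    || global       || chr(10)
    || conn_src     || chr(10)
    || conn_tgt              as param_file_text
from scott.src_param;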
12.2 Mapping to implement SCD Type1 using SQL and IICS #iics
177 views · 2 months ago
Mapping to implement SCD Type1 using SQL and IICS SQL: select Tgt_SK ,EMPNO ,ENAME ,JOB ,MGR ,HIREDATE ,SAL ,COMM ,DEPTNO ,Created_date ,Updated_date ,Delete_Flag ,stg_hash_key ,tgt_hash_key ,case when Tgt_SK is null then 'I' when Tgt_SK is not null and stg_hash_key!=tgt_hash_key then 'U' end as Ins_Upd_Flag from ( select tgt.SK as Tgt_SK ,stg.EMPNO ,stg.ENAME ,stg.JOB ,stg.MGR ,stg.HIREDATE ,s...
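The SQL in that description is cut off, so here is a hedged, trimmed-down sketch of the same SCD Type 1 change-detection idea: rows missing from the target are flagged 'I' and rows whose hash differs are flagged 'U'. The STG_EMP/TGT_EMP names, the SK/HASH_KEY columns and the STANDARD_HASH call are assumptions for illustration only.
select tgt_sk, empno, ename, sal,
       case
         when tgt_sk is null               then 'I'  -- not in target yet: insert
         when stg_hash_key != tgt_hash_key then 'U'  -- attributes changed: update in place
       end as ins_upd_flag
from (
  select tgt.sk                                     as tgt_sk,
         stg.empno, stg.ename, stg.sal,
         standard_hash(stg.ename || '|' || stg.sal) as stg_hash_key,
         tgt.hash_key                               as tgt_hash_key
  from stg_emp stg
  left join tgt_emp tgt on tgt.empno = stg.empno
)
where tgt_sk is null or stg_hash_key != tgt_hash_key;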
12.1 Mapping to create Header Detail and Footer record file format in IICS #headerfooter
171 views · 2 months ago
12.1 Mapping to create Header Detail and Footer record file format in IICS
11.1 IICS Snowflake - Oracle to Snowflake load in IICS #iics #snowflake #idmc
474 views · 7 months ago
In this video I have explained how to create the Snowflake connection in IICS and load records from an Oracle table into a Snowflake table.
3.5.1 Shared Sequence in IICS #iics
1.9K views · 7 months ago
Shared Sequence: 1. A reusable sequence that can be used in multiple Sequence Generator transformations in Informatica Cloud. Steps: 1. Create the shared sequence and use it inside the Sequence Generator transformation.
3.11.1 Dynamic lookup transformation in IICS #iics
1.9K views · 8 months ago
Dynamic Lookup in Informatica Cloud: 1. With a static cache, the data in the lookup cache doesn't change while the mapping task runs. 2. With a dynamic lookup, the cache gets updated as changes occur while the mapping task runs. 3. If you enable the dynamic cache, by default one new return port "NewLookupRow" is included. 4. The value of NewLookupRow can be 0, 1 or 2. 5. NewLookupRow va...
6.7 Modify the mapping where database name and table names are parameterized in IICS #iics
601 views · 8 months ago
How do you modify a parameterized mapping in which the database and tables are parameterized? Step 1: Once the tables/files are updated in the database or source directory (e.g. a column data type changed, or a column added/removed), go to the mapping and, in place of the parameter, provide the hard-coded database and table/file names. Step 2: Refresh the table/file in the source object. Step 3: ...
3.17 Normalizer Transformation in IICS DI #iics #informatica #normalization
1.6K views · 8 months ago
In this video I will explain how to work with the Normalizer transformation in IICS DI.
PySpark Data Bricks Syntax Cheat Sheet #pyspark #python #databricks
390 views · 8 months ago
PySpark Syntax Cheat Sheet: I have covered the below operators/functions from PySpark: 1. Drop a table if it already exists 2. Create a table 3. Insert records into the table 4. Display the records from the table 5. Display selected columns from the table 6. ORDER BY clause to display records in ascending order 7. ORDER BY clause to display records in descending order 8. Select the top N records fr...
SparkSQL Formatting Dates and Numbers in Databricks #SparkSQL #Databricks #sql
214 views · 9 months ago
SparkSQL Formatting Dates and Numbers in Databricks Code: %md # I will explain basic formatting of Dates and Numbers using SparkSQL in Databricks DROP TABLE IF EXISTS colleges; CREATE TABLE colleges USING CSV OPTIONS ( PATH '/databricks-datasets/COVID/covid-19-data/colleges/colleges.csv', header = 'true', inferschema ='true') ; DESCRIBE colleges; select * from colleges; remove decimals select 1...
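A short, hedged Spark SQL sketch of the kind of date and number formatting the video covers; literal values are used so it runs without the colleges table, and the exact functions shown in the video may differ.
select format_number(12345.6789, 2)                as two_decimals,   -- 12,345.68
       round(12345.6789, 0)                        as no_decimals,    -- 12346
       date_format(current_date(), 'dd-MMM-yyyy')  as formatted_date;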
SparkSQL Date function in Databricks #SparkSQL #databricks
119 views · 9 months ago
SparkSQL Date function in Databricks
SparkSQL String Formatting in Databricks #sparksql #databricks
90 views · 9 months ago
String Formatting with SparkSQL in Databricks
SparkSQL Aggregation Function in Databricks #sparksql #databricks
147 views · 9 months ago
Aggregation Function with Spark SQL in Databricks
SparkSQL Arithmetic Calculations in Databricks #sparksql #databricks
110 views · 9 months ago
Arithmetic Calculations using Spark SQL in Databricks
SparkSQL Filter in Databricks #sparksql #databricks
176 views · 9 months ago
SparkSQL Filter in Databricks #sparksql #databricks
Basic Spark SQL in Databricks #sparksql #databricks
224 views · 9 months ago
Basic Spark SQL in Databricks #sparksql #databricks
Watering Anthurium Plants with Guava Juice #anthurium #anthuriumplant
159 views · 9 months ago
Watering Anthurium Plants with Guava Juice #anthurium #anthuriumplant
Watering Anthurium Plants with Apple Juice #anthurium #anthuriumplant
142 views · 9 months ago
Watering Anthurium Plants with Apple Juice #anthurium #anthuriumplant
SparkSQL Create a database, create a table , insert records in Databricks #sparksql #databricks
471 views · 9 months ago
SparkSQL Create a database, create a table , insert records in Databricks #sparksql #databricks
1.3 HASH Functions MD5 SHA1 SHA256 SHA512 in GCP BigQuery #bigquery #AnalyticalFunctions #MD5 #HASH
208 views · 9 months ago
1.3 HASH Functions MD5 SHA1 SHA256 SHA512 in GCP BigQuery #bigquery #AnalyticalFunctions #MD5 #HASH
1.2 RANK and DENSE_RANK Analytical function in GCP BigQuery #BigQuery #analyticalfunctions #rank
78 views · 9 months ago
1.2 RANK and DENSE_RANK Analytical function in GCP BigQuery #BigQuery #analyticalfunctions #rank
1.1 ROW_NUMBER Analytical Function in GCP BigQuery #BigQuery #analyticalfunctions
88 views · 9 months ago
1.1 ROW_NUMBER Analytical Function in GCP BigQuery #BigQuery #analyticalfunctions
Watering Anthurium plant with Carrot juice #anthurium
437 views · 10 months ago
Watering Anthurium plant with Carrot juice #anthurium
1.1 Basic SOQL (Salesforce object query language) Select Queries using WorkBench #soql #salesforce
240 views · 1 year ago
1.1 Basic SOQL (Salesforce object query language) Select Queries using WorkBench #soql #salesforce
10.2 IICS or ETL Effort Estimation & Resource Loading (Part 2)
1.2K views · 1 year ago
10.2 IICS or ETL Effort Estimation & Resource Loading (Part 2)
6.6 Parameterize the Pushdown Optimization & Session Log directory and session log file name in IICS
2.1K views · 1 year ago
6.6 Parameterize the Pushdown Optimization & Session Log directory and session log file name in IICS
10.1 IICS or ETL Effort Estimation & Resource Loading (Part 1)
1.7K views · 1 year ago
10.1 IICS or ETL Effort Estimation & Resource Loading (Part 1)
9.1 Load a file from local folder to GCS , GCS to GCS , GCS to GBQ in IICS DI
1.2K views · 1 year ago
9.1 Load a file from local folder to GCS , GCS to GCS , GCS to GBQ in IICS DI
7.4 Load records into two Salesforce Objects
1.1K views · 1 year ago
7.4 Load records into two Salesforce Objects
Are the Google Storage key and the service key different?
Nice playlist for beginners.
Could you please suggest an expression to change a SQL Server DATE to the Salesforce date format?
Hi sir, by mistake I created trial accounts with two email ids using the same password, and installed the 64-bit agent and secure agent with one email id, but when I try to open Informatica Cloud it is not opening; it shows that the user name or password is not valid. When I try to reset the password it asks me to register an account. Please guide me.
@@aabyaz5124 you can create another account using one email id with a different user name, then install the secure agent again and try.
Hi, how do I trigger tomorrow's files automatically? For example, I have a file called 07-11-2024_orders.csv: I run today's file and schedule the job for tomorrow, and tomorrow it should automatically trigger tomorrow's file, like 08-11-2024_orders.csv. How do I solve this? This was asked in an interview.
If in one source status_id comes as 'Y' and in the other source status_id comes as 'N' for the same value, then what will be generated in the target after the union?
@@RiyaDhar-v3u in that case you will have the value 'Y' for one record and the value 'N' for the other record.
Hi sir, please make videos on real-time projects and explain how to work on them.
Hi Ranjan sir, please let me know how to migrate jobs from DataStage to IICS.
@@anilanche5753 hi, you need to redevelop the DataStage jobs in IICS based on the logic implemented in DataStage.
@@itsranjan2003 so we don't have any migration tool? And can we use the export and import options?
@@anilanche5753 I have not heard about any migration tool. Please raise an Informatica Global Support ticket; they may suggest one if such a tool is available.
Export and import will work for PowerCenter and IICS code only.
@@itsranjan2003 okay, I will do it sir; in the meanwhile, if you find anything please let me know.
The voice is very low... I had to turn the volume up to full.
Please explain IICS admin activities.
Thank you sir
Very informative; can you please continue with more videos on this topic?
Sir, can you help me with the Sequence Generator? I have a mapping with three targets and I want to generate the same sequence numbers for all three targets. How do I achieve it? I have tried but couldn't get it to work.
@@PradeepKanaparthy use an Expression transformation after the Sequence Generator and map NEXTVAL to the targets.
@@itsranjan2003 I connected seq_gen -> expression -> tgt1, seq_gen -> expression -> tgt2, and src -> expr -> router -> tgt1; tgt1 and tgt1 are connected. Please suggest how to achieve it.
@@PradeepKanaparthy add one Expression transformation after the Sequence Generator, map NEXTVAL to the expression, and from the expression map the field to those three targets.
When I choose the target, the Salesforce connection doesn't appear in the target's connection options; it only appears in the source. [I can only choose a flat-file target.] Can you help?
What's the difference between a variable field and an output field?
Hi, I'm trying to run a mapping that updates three different target tables. The issue is that the three have different columns; I just need to update one column on each of the three, so it seems I can't use dynamic mapping tasks. What do you suggest?
Hello sir, I would like to migrate multiple tables from Teradata into S3, but while selecting multiple sources I am not able to select all the tables; only one table gets selected. I have a requirement to migrate multiple tables through only one mapping task. Can you please help me, sir?
@@mdsaad9190-l2e hi, please try a replication task.
Hi Ranjan, I am trying to create a similar process, but my requirement is that I will pass the parameter values while executing a taskflow. I am seeing that the job always takes the values assigned in the task (I have checked the runtime override option).
Hi, have you provided the param file path and param file name in taskflow --> data task --> input fields? Please check; it should work.
All your videos are very simple and very, very informative.
@@rsmchittithalli7334 thanks
Dear Sir: My connection is similar to the one you applied, but it does not work. Please help. Thanks.
If you have created the service account correctly, downloaded the JSON file, and provided the required details correctly while creating the connection, it should work fine. It seems you might not have provided the details correctly while creating the connection.
Very informative
Very nice and informative. Please take care of the background noise
Very helpful
@@rsmchittithalli7334 thanks
Very informative
Very helpful sir
Can you please share this as one PDF or Word document?
Hi sir, please make videos on IICS Informatica.
Error: The DTM process terminated unexpectedly. Contact Informatica Global Customer Support. Can you please help me on this?
What if I have a source query of the type "select * from schema.table" and I want this schema to be parameterized? I have schema A for the dev environment and schema B for QAT. I am trying to define an input parameter with the type as string, but it's not working.
For the schema, define the parameter type as "connection", and for tables define the parameter type as "object" while creating the input parameters.
Thank you for sharing. What benefits does it provide when we compare it with Data Loader? How does it behave with a bulk number of records, like around 500k? Does it take any batch count, i.e. only X number of records can be inserted in one go? Will all the automations like triggers or validation rules work if I follow this process to update some records? Suppose Data Loader takes 5 hours to load 500k records with a batch count of 50 (to avoid the too-many-SOQL exception while keeping the respective trigger on); do you think this tool will work better in this scenario?
Yes, you need to define the batch size in the target properties, based on which the number of records will be inserted in one go. Regarding triggers and validation rules, Informatica will work similarly to Data Loader, as these rules are applied only after the records are inserted into the target SFDC objects. If you don't want all fields to be updated as part of the upsert functionality, you have the option to insert records in one flow and update the object by mapping the required fields in another flow. Using Informatica, you have the option to transform the data (join multiple objects, filter records, aggregate data, etc.) in a better way in comparison to Data Loader. Regarding the
Sir, how do I compare the columns and data types of the source before executing the mapping, based on the comparison results?
To verify the columns and data types you need to use a hard-coded DB/file connection and tables; only then will you be able to see the table columns and data types. Please refer to my video 6.7.
I saw that; it is related to parameterization. All I want is to extract all the columns and their data types from the source and compare them with the previous day's structure of the same source. Do you know of any function that can extract column names and data types as a comma-separated list?
Hi @itsranjan2003, while parameterizing the source and target table names, can we keep multiple source and target table names in the parameter file?
@@kieees_7601 you can keep the first parameter with the first parameter value, the second parameter with the second parameter value, etc.
@@itsranjan2003 for example, I am running two mappings, each with different source tables and different target tables; can I use one single parameter file? That is, can I mention two different source tables and two different target tables from different mappings in the same single parameter file? Like: first_src_table=name, first_tgt_table=name for the first mapping, and second_src_table=name, second_tgt_table=name for the second mapping?
@@kieees_7601 yes, you can use this..
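A hedged example of what such a shared parameter file could look like, reusing the section syntax from the 6.02 table data above; the mapping task paths and table names are placeholders:
#USE_SECTIONS
[Demo_Project].[Scenarios].[mt_first_mapping]
$$first_src_table=SRC_TABLE_1
$$first_tgt_table=TGT_TABLE_1
[Demo_Project].[Scenarios].[mt_second_mapping]
$$second_src_table=SRC_TABLE_2
$$second_tgt_table=TGT_TABLE_2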
Sir: I appreciate your teaching method, which is easy to understand and implement, especially for beginners. Muhammad Musa Khan, North America.
Thank you.
The sound is very low for this one.
Hi, when I'm uploading a flat file the fields are showing horizontally; how do I convert them to a vertical layout? Please help me with this.
Please check whether you have selected the field delimiter properly.
Thanks @itsranjan2003, but I want to check one thing. Scenario: I want to load 200 source tables from a MySQL DB to 200 target tables on a Snowflake DB using just one single mapping, with no transformations, and run that mapping every day. Just run that mapping every morning and do CDC from the source data to the target data. How is that possible? Any idea?
Please try a replication task; multiple source and target tables can be used in this task (my video, section 4.3).
@@itsranjan2003 thank you
@@itsranjan2003 is it also possible to have the source tables coming from different databases and different schemas within those databases, and achieve that with a single replication task? Or should we create a separate replication task per database and schema? Note: the target database and schema are constant for all the different source databases.
@@kieees_7601 I think it will allow tables from only one database in one replication task. For multiple databases and schemas, you may need to use multiple replication tasks.
@@kieees_7601 I think the mass ingestion technique can be applied from IICS; for example, from S3 to Snowflake staging we need to run a taskflow with the help of mass ingestion, so the Snowflake schema gets loaded on a daily basis.
The voice is not there in this video
Yes, please refer to the steps.
@@itsranjan2003 Thank you for your efforts Ranjan!!
Brother, there is no sound in the video.
From where can we get the access key and secret key? And what's inside the service account JSON file?
It will have project_id, private_key, client_email, etc. Please refer to this URL for generating a service account: developers.google.com/workspace/guides/create-credentials#service-account
How do I get the account name?
After you log in to Snowflake, on the left side click Admin --> Accounts. Then hover the cursor over the account name and you will find a notification symbol; click on that and you will see a URL. From the URL, copy the part before ".snowflakecomputing.com". For example: if vabfghj-oiujk876767.snowflakecomputing.com is the URL, the account will be vabfghj-oiujk876767.
@@itsranjan2003 Thanks !
Hi Ranjan, I need your help. Can I connect with you? It looks like you have good experience with parameters and I need your expertise. I can pay for your service. I really appreciate it.
How do I use the IN operator in a Filter transformation?
Use the OR operator.
Nice explanation
Nice video
What is your email, so I can contact you?
Hi sir, if we parameterize the Source transformation with a connection parameter and an object parameter, then the incoming fields won't be there; how can we transform the data in this case if we want to change the date format? Is there any way we can both parameterize and transform the data?
Use a hard-coded connection/object, then update the field, then parameterize the connection/object.
@@itsranjan2003 I still don't understand how this will work. When I am hard-coding and updating the fields, everything seems fine in the target field mapping, but when I parameterize the source, the target shows some fields as unavailable for field mapping.
@@ShaileshNavghare-n6x please watch my video in section 6.7.
Use the hard-coded connection value, update the date format, validate it, then parameterize the connection/object again.
@@itsranjan2003 super sir. It worked! Thank you
Hi, if I want to add transformation logic in a parameterized mapping, how do I add it?
Use a hard-coded connection/object, then update the field, then parameterize the connection/object.
Why did you make two connections, src and tgt? What do they mean? What will this do?
Best practice is to create separate connections for the source and target databases or file paths.