
DataStage - I have a source file that contains duplicate data. The requirement is that unique records should go to one file and duplicate records to another file. How can this be done?




2566 views
asked by mar on November 26, 2014 03:27 AM



           

3 Answers



 
Answered by venkat892

Hi,

In the Sequential File stage, specify a filter command such as `uniq -u` so that only unique records are passed downstream; the remaining duplicate records can be routed to the reject link.
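The `uniq -u`/`uniq -d` semantics above (records occurring exactly once versus records occurring more than once) can be sketched outside DataStage as well. A minimal Python illustration, assuming the whole line is the deduplication key; the function name and sample data are illustrative:

```python
from collections import Counter

def split_unique_and_duplicates(lines):
    """Mimic `sort | uniq -u` and `sort | uniq -d`: lines occurring
    exactly once go to the unique list, lines occurring more than
    once go to the duplicate list (one copy of each)."""
    counts = Counter(lines)
    unique = [line for line in counts if counts[line] == 1]
    duplicates = [line for line in counts if counts[line] > 1]
    return unique, duplicates

records = ["A", "B", "A", "C"]
uniq, dup = split_unique_and_duplicates(records)
print(uniq)  # ['B', 'C']
print(dup)   # ['A']
```

Note that `uniq` itself only compares adjacent lines, so real input must be sorted first; the `Counter` version handles unsorted input directly.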

 
Answered by durgadash

There are many ways to solve this problem; here is one:

Flat file -> Copy -> Aggregator -> Transformer -> Join -> target 1 (unique records) and target 2 (duplicate records)
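The fork-and-join design above can be sketched as: copy the stream, aggregate a count per key, join the count back onto each row, then split on the count. A hedged Python sketch of that pattern; the function name, key extractor, and sample rows are illustrative:

```python
from collections import Counter

def fork_join_split(rows, key):
    """Copy -> Aggregator (count per key) -> Join back -> split:
    rows whose key occurs exactly once are unique; every row of a
    key that occurs more than once goes to the duplicates output."""
    counts = Counter(key(r) for r in rows)               # Aggregator branch
    uniques = [r for r in rows if counts[key(r)] == 1]   # Join + filter
    duplicates = [r for r in rows if counts[key(r)] > 1]
    return uniques, duplicates

rows = [("A", 1), ("B", 2), ("A", 3)]
print(fork_join_split(rows, key=lambda r: r[0]))
# → ([('B', 2)], [('A', 1), ('A', 3)])
```

Unlike the `uniq -d` approach, this variant keeps every copy of a duplicated key in the duplicates output, which is sometimes what the requirement actually asks for.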

 
Answered by Mahendra

Job Design

Sequential File stage -> Sort stage -> Filter stage -> 2 datasets

Read the data with the Sequential File stage. In the Sort stage, set Create Key Change Column = True; this adds a column called keyChange that distinguishes unique from duplicate records based on the sort key: the first record of each key group is assigned 1, and subsequent (duplicate) records are assigned 0.

In the Filter stage, use `where keyChange = 1` for the unique records and `where keyChange = 0` for the duplicate records.
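The Sort-stage key-change logic can be sketched in a few lines: after sorting on the key, the first row of each key group gets keyChange = 1 and later rows in the group get 0. A minimal Python illustration of that behavior; the function name and sample rows are illustrative:

```python
def key_change_split(records, key):
    """Simulate Sort (Create Key Change Column = True) + Filter:
    after sorting on the key, the first record of each key group is
    marked keyChange = 1 (unique output); subsequent records in the
    same group are marked keyChange = 0 (duplicates output)."""
    uniques, duplicates = [], []
    prev = object()  # sentinel that never equals a real key value
    for rec in sorted(records, key=key):
        if key(rec) != prev:
            uniques.append(rec)      # keyChange = 1
        else:
            duplicates.append(rec)   # keyChange = 0
        prev = key(rec)
    return uniques, duplicates

rows = [("A", 1), ("B", 2), ("A", 3)]
uniq, dup = key_change_split(rows, key=lambda r: r[0])
print(uniq)  # [('A', 1), ('B', 2)]
print(dup)   # [('A', 3)]
```

This keeps one representative of every key in the unique output, which is the behavior the Key_Change column gives in the job design above.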

