
Hadoop custom output RecordWriter error

I am processing video with Hadoop and MapReduce (fully distributed mode) using HVPI, an open-source video interface for Hadoop. The video is split into frames, and I want to use those frames to build a new video with the Xuggler API.

The map phase completes, but the reduce phase fails with java.lang.RuntimeException: error Operation not allowed. I suspect this is because I create the new video in a local directory on the master node, and I don't know how to make this work against HDFS.

17/03/25 08:07:12 INFO client.RMProxy: Connecting to ResourceManager at evoido/192.168.25.11:8032 
17/03/25 08:07:13 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir 
17/03/25 08:07:13 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this. 
17/03/25 08:29:50 INFO input.FileInputFormat: Total input paths to process : 1 
17/03/25 08:29:51 INFO mapreduce.JobSubmitter: number of splits:1 
17/03/25 08:29:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1490439401793_0001 
17/03/25 08:29:52 INFO impl.YarnClientImpl: Submitted application application_1490439401793_0001 
17/03/25 08:29:52 INFO mapreduce.Job: The url to track the job: http://evoido:8088/proxy/application_1490439401793_0001/ 
17/03/25 08:29:52 INFO mapreduce.Job: Running job: job_1490439401793_0001 
17/03/25 08:30:28 INFO mapreduce.Job: Job job_1490439401793_0001 running in uber mode : false 
17/03/25 08:30:28 INFO mapreduce.Job: map 0% reduce 0% 
17/03/25 08:30:52 INFO mapreduce.Job: map 100% reduce 0% 
17/03/25 08:30:52 INFO mapreduce.Job: Task Id : attempt_1490439401793_0001_m_000000_0, Status : FAILED 
17/03/25 08:30:54 INFO mapreduce.Job: map 0% reduce 0% 
17/03/25 08:37:40 INFO mapreduce.Job: map 68% reduce 0% 
17/03/25 08:37:43 INFO mapreduce.Job: map 69% reduce 0% 
17/03/25 08:37:52 INFO mapreduce.Job: map 73% reduce 0% 
17/03/25 08:38:30 INFO mapreduce.Job: map 82% reduce 0% 
17/03/25 08:39:26 INFO mapreduce.Job: map 100% reduce 0% 
17/03/25 08:40:36 INFO mapreduce.Job: map 100% reduce 67% 
17/03/25 08:40:39 INFO mapreduce.Job: Task Id : attempt_1490439401793_0001_r_000000_0, Status : FAILED 
Error: java.lang.RuntimeException: error Operação não permitida, failed to write trailer to /home/idobrt/Vídeos/Result/ 
     at com.xuggle.mediatool.MediaWriter.close(MediaWriter.java:1306) 
     at ads.ifba.edu.tcc.util.MediaWriter.close(MediaWriter.java:97) 
     at edu.bupt.videodatacenter.input.VideoRecordWriter.close(VideoRecordWriter.java:61) 
     at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.close(ReduceTask.java:550) 
     at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:629) 
     at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389) 
     at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164) 
     at java.security.AccessController.doPrivileged(Native Method) 
     at javax.security.auth.Subject.doAs(Subject.java:422) 
     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657) 
     at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158) 

Here is my VideoRecordWriter implementation (VIDEO_NAME is a local directory on the name node):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;

// ImageWritable comes from HVPI.
public class VideoRecordWriter extends RecordWriter<Text, ImageWritable> {

    private FileSystem fs;

    @Override
    public void close(TaskAttemptContext job) throws IOException, InterruptedException {
        Configuration conf = job.getConfiguration();
        // "mapred.output.dir" is deprecated; the replacement key is used here.
        Path outputPath = new Path(conf.get("mapreduce.output.fileoutputformat.outputdir"));
        fs = outputPath.getFileSystem(conf);

        // Finalize the video container; this is where "failed to write trailer" is raised.
        MediaWriter.initialize().close();
        //fs.copyFromLocalFile(new Path(MediaWriter.initialize().getVideoPath()), outputPath);
        // Note: FileSystem instances are cached and shared, so closing one here
        // can break other code that still holds the same instance.
        fs.close();
    }

    @Override
    public void write(Text key, ImageWritable img) throws IOException, InterruptedException {
        // Size the container from the first frame, create it once, then append frames.
        MediaWriter.initialize().setDimentions(img.getBufferedImage());
        MediaWriter.initialize().creaVideoContainer();
        MediaWriter.initialize().create(img.getBufferedImage());
    }
}
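
For context, Hadoop only invokes a custom RecordWriter through a matching OutputFormat, so a writer like the one above is normally returned from getRecordWriter(). A minimal sketch of that wiring (the VideoOutputFormat class name is hypothetical, not from the question):

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical OutputFormat that hands VideoRecordWriter to the framework;
// registered on the job with job.setOutputFormatClass(VideoOutputFormat.class).
public class VideoOutputFormat extends FileOutputFormat<Text, ImageWritable> {

    @Override
    public RecordWriter<Text, ImageWritable> getRecordWriter(TaskAttemptContext context)
            throws IOException, InterruptedException {
        return new VideoRecordWriter();
    }
}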


import java.awt.image.BufferedImage;
import java.io.IOException;
import java.io.OutputStream;
import java.util.concurrent.TimeUnit;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.Text;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.ICodec;
import com.xuggle.xuggler.IContainerFormat;

// Singleton wrapper around Xuggler's IMediaWriter.
public class MediaWriter {

    private MediaWriter() {
    }

    public static MediaWriter initialize() throws IOException {
        if (instance == null) {
            instance = new MediaWriter();

            /*
            fs = FileSystem.get(new Configuration());
            outputStream = fs.create(new Path("hdfs://evoido:9000/video/teste.mp4"));
            containerFormat = IContainerFormat.make();
            containerFormat.setOutputFormat("mpeg4", null, "video/ogg");

            writer.getContainer().setFormat(containerFormat);
            writer = ToolFactory.makeWriter(XugglerIO.map(outputStream));
            */
        }
        return instance;
    }

    public void setDimentions(BufferedImage img) {
        if ((WIDTH == 0) && (HEIGHT == 0)) {
            WIDTH = img.getWidth();
            HEIGHT = img.getHeight();
        }
    }

    public void setFileName(Text key) {
        if (fileName == null) {
            fileName = key.toString();
            VIDEO_NAME += fileName.substring(0, (fileName.lastIndexOf("_") - 4)) + ".mp4";
        }
    }

    public void creaVideoContainer() throws IOException {
        if (writer == null) {
            writer = ToolFactory.makeWriter(VIDEO_NAME);
            /*
            fs = FileSystem.get(new Configuration());
            outputStream = fs.create(new Path("hdfs://evoido:9000/video/teste.mp4"));
            containerFormat = IContainerFormat.make();
            containerFormat.setOutputFormat("mpeg4", null, "video/ogg");
            */
            // containerFormat is only initialized in the commented-out block,
            // so it is still null on this path.
            writer.getContainer().setFormat(containerFormat);

            writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MPEG4, WIDTH, HEIGHT);
        }
    }

    public void create(BufferedImage img) {
        // We still need to work out how to set the timestamp correctly.
        if (offset == 0) {
            offset = calcTimeStamp();
        }

        writer.encodeVideo(0, img, timeStamp, TimeUnit.NANOSECONDS);
        timeStamp += offset;
    }

    public void close() {
        writer.close();
    }

    public String getVideoPath() {
        return VIDEO_NAME;
    }

    public void setTime(long interval) {
        time += interval;
    }

    public void setQtdFrame(long frameNum) {
        qtdFrame = frameNum;
    }

    public long calcTimeStamp() {
        double interval = 0.0;
        double timeLong = Math.round(time / CONST);
        double result = (time / (double) qtdFrame) * 1000.0;

        if ((timeLong > 3600) && ((time % qtdFrame) != 0)) {
            interval = 1000.0;
            double overplus = timeLong / 3600.0;
            if (overplus >= 2) {
                interval *= overplus;
            }
            result += interval;
        }

        return (long) Math.round(result);
    }

    public void setFramerate(double frameR) {
        if (frameRate == 0) {
            frameRate = frameR;
        }
    }

    private static IMediaWriter writer;
    private static long nextFrameTime = 0;
    private static FileSystem fs;
    private static OutputStream outputStream;
    private static MediaWriter instance;
    private static IContainerFormat containerFormat;
    private static String VIDEO_NAME = "/home/idobrt/Vídeos/Result/";
    private static int WIDTH = 0;
    private static int HEIGHT = 0;
    private static String fileName = null;
    private static long timeStamp = 0;
    private static double time = 0;
    private static long qtdFrame = 0;
    private static long offset = 0;
    private static long startTime = 0;
    private static double frameRate = 0;
    private static double CONST = 1000000.0;
    private static double INTERVAL = 1000.0;
}

The problem is exactly this line: writer = ToolFactory.makeWriter(VIDEO_NAME);. Does anyone know the correct way to do this? The correct way would be to write the file to HDFS. It works when the job runs in the LocalJobRunner, but that kills the parallelism.
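
If the goal is to stream the container bytes straight into HDFS, the commented-out XugglerIO.map attempt from initialize() would look roughly like the sketch below. This is only a sketch under the question's own assumptions (the hdfs://evoido:9000/video/teste.mp4 path and the "mpeg4" container format come from the commented code); note that container formats whose muxer needs a seekable output, such as MP4, may still fail at the write-trailer step over a one-way HDFS stream.

import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import com.xuggle.mediatool.IMediaWriter;
import com.xuggle.mediatool.ToolFactory;
import com.xuggle.xuggler.ICodec;
import com.xuggle.xuggler.IContainerFormat;
import com.xuggle.xuggler.io.XugglerIO;

public class HdfsStreamWriterSketch {

    // Opens an IMediaWriter that writes directly into an HDFS file.
    public static IMediaWriter open(int width, int height) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        OutputStream out = fs.create(new Path("hdfs://evoido:9000/video/teste.mp4"));

        // XugglerIO.map wraps the stream in a pseudo-URL that Xuggler can open;
        // the container format must then be set explicitly, because there is
        // no file extension to infer it from.
        IMediaWriter writer = ToolFactory.makeWriter(XugglerIO.map(out));
        IContainerFormat fmt = IContainerFormat.make();
        fmt.setOutputFormat("mpeg4", null, null);
        writer.getContainer().setFormat(fmt);

        writer.addVideoStream(0, 0, ICodec.ID.CODEC_ID_MPEG4, width, height);
        return writer;
    }
}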

Answer


For now I just save the file on the datanode (where the reduce phase runs) and then copy it to HDFS. It is not the best solution, but it works for now.
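
A minimal sketch of that workaround, reusing the commented-out copyFromLocalFile line from VideoRecordWriter.close() (the helper class and method names are illustrative):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class LocalThenHdfs {

    // Finish the video on the reducer's local disk, then upload it to HDFS.
    // Call from VideoRecordWriter.close() after MediaWriter.initialize().close(),
    // e.g. publish(conf, MediaWriter.initialize().getVideoPath(), outputPath).
    public static void publish(Configuration conf, String localVideo, Path hdfsOutputDir)
            throws IOException {
        FileSystem fs = hdfsOutputDir.getFileSystem(conf);
        // delSrc = true deletes the local copy once it is safely in HDFS.
        fs.copyFromLocalFile(true, new Path(localVideo), hdfsOutputDir);
    }
}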