I am trying to work with the IBM Watson Android speech-to-text library. I noticed there is a similar question here, but it was never answered. I have integrated everything, but when the service starts recognizing it throws this error: Native library (com/sun/jna/android-aarch64/libjnidispatch.so) not found in resource path

                   java.lang.UnsatisfiedLinkError: Native library (com/sun/jna/android-aarch64/libjnidispatch.so) not found in resource path (.) 
                        at com.sun.jna.Native.loadNativeDispatchLibraryFromClasspath(Native.java:786) 
                        at com.sun.jna.Native.loadNativeDispatchLibrary(Native.java:746) 
                        at com.sun.jna.Native.<clinit>(Native.java:135) 
                        at com.sun.jna.NativeLibrary.<clinit>(NativeLibrary.java:82) 
                        at com.sun.jna.NativeLibrary.getInstance(NativeLibrary.java:337) 
                        at com.ibm.watson.developer_cloud.android.speech_to_text.v1.opus.JNAOpus.<clinit>(JNAOpus.java:42) 
                        at com.ibm.watson.developer_cloud.android.speech_to_text.v1.audio.OggOpusEnc.initEncoderWithUploader(OggOpusEnc.java:53) 
                        at com.ibm.watson.developer_cloud.android.speech_to_text.v1.audio.WebSocketUploader.initStreamAudioToServer(WebSocketUploader.java:113) 
                        at com.ibm.watson.developer_cloud.android.speech_to_text.v1.audio.WebSocketUploader.access$000(WebSocketUploader.java:46) 
                        at com.ibm.watson.developer_cloud.android.speech_to_text.v1.audio.WebSocketUploader$2.run(WebSocketUploader.java:175) 
07-23 12:51:36.854 27413-27757/com.intellidev.mobitranscribe E/AudioRecord: AudioFlinger could not create record track, status: -1 
07-23 12:51:36.874 27413-27757/com.intellidev.mobitranscribe E/AudioRecord-JNI: Error creating AudioRecord instance: initialization check failed with status -1. 
07-23 12:51:36.874 27413-27757/com.intellidev.mobitranscribe E/android.media.AudioRecord: Error code -20 when initializing native AudioRecord object. 
07-23 12:51:36.924 27413-27757/com.intellidev.mobitranscribe E/AudioCaptureThread: Error reading voice audio 
                        java.lang.IllegalStateException: startRecording() called on an uninitialized AudioRecord. 
                         at android.media.AudioRecord.startRecording(AudioRecord.java:943) 
                         at com.ibm.watson.developer_cloud.android.speech_to_text.v1.audio.AudioCaptureThread.run(AudioCaptureThread.java:62) 
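Besides the JNA error, the log also shows AudioRecord failing to initialize (status -1, then startRecording() on an uninitialized AudioRecord). On Android 6.0+ that usually means the RECORD_AUDIO permission has not been granted at runtime, or another app is holding the microphone. A minimal sketch of a runtime permission check before calling recognize() — the helper name and request code are just for illustration, not part of my code:

    import android.Manifest;
    import android.content.pm.PackageManager;
    import android.support.v4.app.ActivityCompat;
    import android.support.v4.content.ContextCompat;

    // Inside the Activity: illustrative helper, arbitrary request code.
    private static final int REQUEST_RECORD_AUDIO = 1;

    private boolean ensureMicrophonePermission() {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
                != PackageManager.PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this,
                    new String[]{Manifest.permission.RECORD_AUDIO},
                    REQUEST_RECORD_AUDIO);
            return false; // wait for onRequestPermissionsResult before recording
        }
        return true;
    }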

My code is here:

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    Toolbar toolbar = (Toolbar) findViewById(R.id.toolbar);
    //setSupportActionBar(toolbar);

    txtSpeechInput = (TextView) findViewById(R.id.textINput);

    // Start recognition when the mic button is tapped
    btnSpeak = (ImageButton) findViewById(R.id.btnSpeak);
    btnSpeak.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            //promptSpeechInput();
            //set();
            SpeechToText.sharedInstance().recognize();
        }
    });

    // Stop recording when the end button is tapped
    btnEnd = (ImageButton) findViewById(R.id.btnEnd);
    btnEnd.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View view) {
            SpeechToText.sharedInstance().stopRecording();
        }
    });

    DrawerLayout drawer = (DrawerLayout) findViewById(R.id.drawer_layout);
    ActionBarDrawerToggle toggle = new ActionBarDrawerToggle(
            this, drawer, toolbar, R.string.navigation_drawer_open, R.string.navigation_drawer_close);
    drawer.setDrawerListener(toggle);
    toggle.syncState();

    NavigationView navigationView = (NavigationView) findViewById(R.id.nav_view);
    navigationView.setNavigationItemSelectedListener(this);

    // Configuration: stream audio as Ogg/Opus
    SpeechConfiguration sConfig = new SpeechConfiguration(SpeechConfiguration.AUDIO_FORMAT_OGGOPUS);
    // STT: initialize the shared SpeechToText instance with host, context and config
    SpeechToText.sharedInstance().initWithContext(this.getHost("https://gateway.watsonplatform.net/conversation/api"), this.getApplicationContext(), sConfig);
    SpeechToText.sharedInstance().setCredentials("PERSONAL", "INFO");
    SpeechToText.sharedInstance().setDelegate(this);
}
Answer


I needed to use the code below instead, which is taken from the Nuance sample:

// Create a Nuance SpeechKit session with the server URI and app key
Session speechSession = Session.Factory.session(this,
        com.intellidev.mobitranscribe.Configuration.SERVER_URI,
        com.intellidev.mobitranscribe.Configuration.APP_KEY);

// Dictation with long-utterance end-of-speech detection, US English
Transaction.Options options = new Transaction.Options();
options.setRecognitionType(RecognitionType.DICTATION);
options.setDetection(DetectionType.Long);
options.setLanguage(new Language("ENG-USA"));

// Start listening
recoTransaction = speechSession.recognize(options, recoListener);
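For completeness, recoTransaction and recoListener in the snippet above are fields that are not shown. A rough sketch of the listener, with the callback set taken from the Nuance SpeechKit 2 dictation sample from memory (check the exact signatures against the SDK version you use):

    private Transaction recoTransaction;

    // Assumed to mirror the SpeechKit 2 sample; exact callbacks may differ per SDK version.
    private final Transaction.Listener recoListener = new Transaction.Listener() {
        @Override
        public void onStartedRecording(Transaction transaction) {
            // Microphone is open; update the UI to a "listening" state here
        }

        @Override
        public void onFinishedRecording(Transaction transaction) {
            // Microphone closed; the server keeps processing the audio
        }

        @Override
        public void onRecognition(Transaction transaction, Recognition recognition) {
            txtSpeechInput.setText(recognition.getText()); // show the transcript
        }

        @Override
        public void onError(Transaction transaction, String suggestion, TransactionException e) {
            // Inspect e and suggestion to see why recognition failed
        }
    };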