DataStax connection exception with Beeline or Hive2 JDBC driver (Tableau)

I have installed DataStax Enterprise 2.8 on my dev VM (CentOS 7). The installation went smoothly and the single-node cluster works well. But when I try to connect to the cluster with Beeline or the Hive2 JDBC driver, I get the error shown below. My main goal is to connect Tableau using the DataStax Enterprise driver or the Spark SQL driver.

The error observed is:

ERROR 2016-04-14 17:57:56,915 org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269) ~[libthrift-0.9.3.jar:0.9.3]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_99]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_99]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_99]
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
    at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41) ~[libthrift-0.9.3.jar:0.9.3]
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216) ~[libthrift-0.9.3.jar:0.9.3]
    ... 4 common frames omitted
ERROR 2016-04-14 17:58:59,140 org.apache.spark.scheduler.cluster.SparkDeploySchedulerBackend: Application has been killed. Reason: Master removed our application: KILLED

My cassandra.yaml config:

cluster_name: 'Cluster1'

num_tokens: 256

hinted_handoff_enabled: true

hinted_handoff_throttle_in_kb: 1024

max_hints_delivery_threads: 2

batchlog_replay_throttle_in_kb: 1024

authenticator: AllowAllAuthenticator

authorizer: AllowAllAuthorizer

permissions_validity_in_ms: 2000

partitioner: org.apache.cassandra.dht.Murmur3Partitioner

data_file_directories:
    - /var/lib/cassandra/data

commitlog_directory: /var/lib/cassandra/commitlog

disk_failure_policy: stop

commit_failure_policy: stop

key_cache_size_in_mb:

key_cache_save_period: 14400

row_cache_size_in_mb: 0

row_cache_save_period: 0

counter_cache_size_in_mb:

counter_cache_save_period: 7200

saved_caches_directory: /var/lib/cassandra/saved_caches

commitlog_sync: periodic

commitlog_sync_period_in_ms: 10000

commitlog_segment_size_in_mb: 32

seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.33.1.124"

concurrent_reads: 32

concurrent_writes: 32

concurrent_counter_writes: 32

memtable_allocation_type: heap_buffers

index_summary_capacity_in_mb:

index_summary_resize_interval_in_minutes: 60

trickle_fsync: false

trickle_fsync_interval_in_kb: 10240

storage_port: 7000

ssl_storage_port: 7001

listen_address: 10.33.1.124

start_native_transport: true

native_transport_port: 9042

start_rpc: true

rpc_address: 10.33.1.124

rpc_port: 9160

rpc_keepalive: true

rpc_server_type: sync

thrift_framed_transport_size_in_mb: 15

incremental_backups: false

snapshot_before_compaction: false

auto_snapshot: true

tombstone_warn_threshold: 1000

tombstone_failure_threshold: 100000

column_index_size_in_kb: 64

batch_size_warn_threshold_in_kb: 64

compaction_throughput_mb_per_sec: 16

compaction_large_partition_warning_threshold_mb: 100

sstable_preemptive_open_interval_in_mb: 50

read_request_timeout_in_ms: 5000

range_request_timeout_in_ms: 10000

write_request_timeout_in_ms: 2000

counter_write_request_timeout_in_ms: 5000

cas_contention_timeout_in_ms: 1000

truncate_request_timeout_in_ms: 60000

request_timeout_in_ms: 10000

cross_node_timeout: false

endpoint_snitch: com.datastax.bdp.snitch.DseSimpleSnitch

dynamic_snitch_update_interval_in_ms: 100

dynamic_snitch_reset_interval_in_ms: 600000

dynamic_snitch_badness_threshold: 0.1

request_scheduler: org.apache.cassandra.scheduler.NoScheduler

server_encryption_options:
    internode_encryption: none
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra
    truststore: resources/dse/conf/.truststore
    truststore_password: cassandra

client_encryption_options:
    enabled: false
    optional: false
    keystore: resources/dse/conf/.keystore
    keystore_password: cassandra

internode_compression: dc

inter_dc_tcp_nodelay: false


When connecting with Beeline, I get the error:

dse beeline
Beeline version 0.12.0.11 by Apache Hive
beeline> !connect jdbc:hive2://10.33.1.124:10000
scan complete in 10ms
Connecting to jdbc:hive2://10.33.1.124:10000
Enter username for jdbc:hive2://10.33.1.124:10000: cassandra
Enter password for jdbc:hive2://10.33.1.124:10000: *********
Error: Invalid URL: jdbc:hive2://10.33.1.124:10000 (state=08S01,code=0)
0: jdbc:hive2://10.33.1.124:10000> !connect jdbc:hive2://10.33.1.124:10000
Connecting to jdbc:hive2://10.33.1.124:10000
Enter username for jdbc:hive2://10.33.1.124:10000:
Enter password for jdbc:hive2://10.33.1.124:10000:
Error: Invalid URL: jdbc:hive2://10.33.1.124:10000 (state=08S01,code=0)
1: jdbc:hive2://10.33.1.124:10000>

I see similar errors when connecting through Tableau as well.

Answer


The JDBC driver connects to the Spark SQL Thrift server. If you don't start it, you won't be able to connect:

dse spark-sql-thriftserver 
/Users/russellspitzer/dse/bin/dse: 
usage: dse spark-sql-thriftserver <command> [Spark SQL Thriftserver Options] 

Available commands: 
    start        Start Spark SQL Thriftserver 
    stop        Stops Spark SQL Thriftserver
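Putting it together, a minimal sketch of the sequence would be the shell session below. The `start` subcommand comes from the usage output above; the host, port 10000, and credentials are assumptions carried over from the question's Beeline attempt, so adjust them for your environment.

```shell
# Start the Spark SQL Thrift server on the DSE node
# (assumption: it binds to the default HiveServer2 port 10000).
dse spark-sql-thriftserver start

# With the server running, retry the same Beeline URL from the question:
dse beeline
# beeline> !connect jdbc:hive2://10.33.1.124:10000

# Tableau's Spark SQL connector should then be pointed at the same
# host (10.33.1.124) and port (10000).

# When finished, shut the server down:
dse spark-sql-thriftserver stop
```

Since the cluster uses `AllowAllAuthenticator`, the username and password prompted by Beeline are likely not validated, but Tableau may still require some value to be entered.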