Error: phantom-dsl BatchQuery fails with overloaded method
I am trying to extend my application with another Cassandra table that stores the transactions contained in each block.
I have tried to keep the code snippets concise and relevant. If more code context is needed, let me know.
phantomVersion = "1.22.0" cassandraVersion = "2.1.4"
I am getting the compilation error below with the code listed underneath. Any insight is much appreciated.
[error] /home/dan/projects/open-blockchain/scanner/src/main/scala/org/dyne/danielsan/openblockchain/data/database/Database.scala:30: overloaded method value add with alternatives:
[error] (batch: com.websudos.phantom.batch.BatchQuery[_])com.websudos.phantom.batch.BatchQuery[com.websudos.phantom.builder.Unspecified] <and>
[error] (queries: Iterator[com.websudos.phantom.builder.query.Batchable with com.websudos.phantom.builder.query.ExecutableStatement])(implicit session: com.datastax.driver.core.Session)com.websudos.phantom.batch.BatchQuery[com.websudos.phantom.builder.Unspecified] <and>
[error] (queries: com.websudos.phantom.builder.query.Batchable with com.websudos.phantom.builder.query.ExecutableStatement*)(implicit session: com.datastax.driver.core.Session)com.websudos.phantom.batch.BatchQuery[com.websudos.phantom.builder.Unspecified] <and>
[error] (query: com.websudos.phantom.builder.query.Batchable with com.websudos.phantom.builder.query.ExecutableStatement)(implicit session: com.datastax.driver.core.Session)com.websudos.phantom.batch.BatchQuery[com.websudos.phantom.builder.Unspecified]
[error] cannot be applied to (scala.concurrent.Future[com.datastax.driver.core.ResultSet])
[error] .add(ChainDatabase.bt.insertNewBlockTransaction(bt))
[error] ^
[error] one error found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 6 s, completed Aug 9, 2016 2:42:30 PM
GenericBlockModel.scala:
case class BlockTransaction(hash: String, txid: String)

sealed class BlockTransactionModel extends CassandraTable[BlockTransactionModel, BlockTransaction] {

  override def fromRow(r: Row): BlockTransaction = {
    BlockTransaction(
      hash(r),
      txid(r)
    )
  }

  object hash extends StringColumn(this) with PartitionKey[String]

  object txid extends StringColumn(this) with ClusteringOrder[String] with Descending
}

abstract class ConcreteBlockTransactionModel extends BlockTransactionModel with RootConnector {

  override val tableName = "block_transactions"

  def insertNewBlockTransaction(bt: BlockTransaction): Future[ResultSet] = insertNewRecord(bt).future()

  def insertNewRecord(bt: BlockTransaction) = {
    insert
      .value(_.hash, bt.hash)
      .value(_.txid, bt.txid)
  }
}
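If I read the error correctly, every add overload wants the unexecuted statement (a Batchable with ExecutableStatement), whereas insertNewBlockTransaction has already called .future() and therefore returns a Future[ResultSet]. A small sketch just to make the two types visible (the helper name is made up for illustration only):

import scala.concurrent.Future
import com.datastax.driver.core.ResultSet

def illustrateAddTypes(model: ConcreteBlockTransactionModel, bt: BlockTransaction): Unit = {
  // Unexecuted insert query: the Batchable with ExecutableStatement that add(...) accepts.
  val query = model.insertNewRecord(bt)
  // Already executed via .future(): a Future[ResultSet], which matches none of the add(...) overloads.
  val running: Future[ResultSet] = model.insertNewBlockTransaction(bt)
}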
Database.scala
class Database(val keyspace: KeySpaceDef) extends DatabaseImpl(keyspace) {

  def insertBlock(block: Block) = {
    Batch.logged
      .add(ChainDatabase.block.insertNewRecord(block))
      .future()
  }

  def insertTransaction(tx: Transaction) = {
    Batch.logged
      .add(ChainDatabase.tx.insertNewTransaction(tx))
      .future()
  }

  def insertBlockTransaction(bt: BlockTransaction) = {
    Batch.logged
      .add(ChainDatabase.btx.insertNewBlockTransaction(bt))
      .future()
  }

  object block extends ConcreteBlocksModel with keyspace.Connector

  object tx extends ConcreteTransactionsModel with keyspace.Connector

  object btx extends ConcreteBlockTransactionModel with keyspace.Connector
}
object ChainDatabase extends Database(Config.keySpaceDefinition)
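Based on those overloads, the only version of insertBlockTransaction I can see type-checking is one that passes the unexecuted query to add and leaves the .future() call to the batch itself (a sketch only, reusing the names from the snippets above):

def insertBlockTransaction(bt: BlockTransaction) = {
  Batch.logged
    .add(ChainDatabase.btx.insertNewRecord(bt)) // hand add(...) the query itself, not the Future produced by insertNewBlockTransaction
    .future()
}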
Batches are slow for multi-partition operations. See also https://inoio.de/blog/2016/01/13/cassandra-to-batch-or-not-to-batch/ – mmatloka
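To illustrate the comment: since this writes a single row, the batch could be dropped entirely and the insert executed on its own (again only a sketch built from the snippets above, assuming the same imports as the Database class):

def insertBlockTransaction(bt: BlockTransaction): Future[ResultSet] =
  ChainDatabase.btx.insertNewRecord(bt).future() // single-partition write; no logged batch needed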