sealed abstract case class AnalyzerPipe[F[_]] extends Product with Serializable
AnalyzerPipe provides methods to tokenize a possibly very long Stream[F, String]
or Stream[F, Byte], such as from a file. When possible, prefer starting with a
Stream[F, Byte] and use tokenizeBytes.
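A minimal usage sketch, assuming cats-effect and fs2: this page does not show how an `AnalyzerPipe` is constructed, so the `pipe` parameter below is a stand-in for an instance obtained elsewhere in the library.

```scala
import cats.effect.IO
import fs2.Stream
import fs2.io.file.{Files, Path}

// Read a file as bytes and feed it through the pipe, as the class docs
// recommend: start from Stream[F, Byte] and use tokenizeBytes.
// `pipe` is an assumed, already-constructed AnalyzerPipe[IO].
def fileTokens(pipe: AnalyzerPipe[IO], file: Path): Stream[IO, String] =
  Files[IO]
    .readAll(file)                          // Stream[IO, Byte]
    .through(bs => pipe.tokenizeBytes(bs, tokenN = 64))
```

The `tokenN = 64` value is only illustrative; it bounds how many tokens are read from the Analyzer at a time.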
- Source: AnalyzerPipe.scala
- Linear Supertypes: Serializable, Product, Equals, AnyRef, Any
Value Members
- final def !=(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def ##: Int
- Definition Classes
- AnyRef → Any
- final def ==(arg0: Any): Boolean
- Definition Classes
- AnyRef → Any
- final def asInstanceOf[T0]: T0
- Definition Classes
- Any
- def clone(): AnyRef
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.CloneNotSupportedException]) @native()
- final def eq(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- def finalize(): Unit
- Attributes
- protected[lang]
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.Throwable])
- final def getClass(): Class[_ <: AnyRef]
- Definition Classes
- AnyRef → Any
- Annotations
- @native()
- final def isInstanceOf[T0]: Boolean
- Definition Classes
- Any
- final def ne(arg0: AnyRef): Boolean
- Definition Classes
- AnyRef
- final def notify(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- final def notifyAll(): Unit
- Definition Classes
- AnyRef
- Annotations
- @native()
- def productElementNames: Iterator[String]
- Definition Classes
- Product
- val readerF: (Reader) => Resource[F, TokenGetter]
- final def synchronized[T0](arg0: => T0): T0
- Definition Classes
- AnyRef
- def tokenizeBytes(in: Stream[F, Byte], tokenN: Int): Stream[F, String]
Emits a string for every token, as determined by the Analyzer, in the input stream. Decoding from bytes to strings is done using the default charset.
- in
input stream to tokenize
- tokenN
maximum number of tokens to read at a time
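A hedged sketch of calling `tokenizeBytes` on an in-memory byte stream; `pipe` is an assumed `AnalyzerPipe[IO]`, and the actual tokens emitted depend on the underlying Analyzer, so no output is asserted here.

```scala
import cats.effect.IO
import fs2.Stream

// Encode a string to bytes, then tokenize. Note the charset used for
// *encoding* here; tokenizeBytes decodes with the default charset.
def exampleTokens(pipe: AnalyzerPipe[IO]): Stream[IO, String] = {
  val bytes: Stream[IO, Byte] =
    Stream.emits("Hello streaming world".getBytes("UTF-8").toIndexedSeq)
  pipe.tokenizeBytes(bytes, tokenN = 16)
}
```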
- def tokenizeStrings(in: Stream[F, String], tokenN: Int): Stream[F, String]
Emits a string for every token, as determined by the Analyzer, in the input stream. A space is inserted between each element in the input stream to avoid accidentally combining words. See tokenizeStringsRaw to avoid this behaviour.
- in
input stream to tokenize
- tokenN
maximum number of tokens to read at a time
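Because a space is interposed between stream elements, words that end one element and begin the next are kept apart. A sketch, again with `pipe` standing in for an assumed `AnalyzerPipe[IO]`:

```scala
import cats.effect.IO
import fs2.Stream

// The implicit space between "foo" and "bar" means the Analyzer sees
// "foo bar", so the two words are not fused into a single token.
def separateWords(pipe: AnalyzerPipe[IO]): Stream[IO, String] =
  pipe.tokenizeStrings(Stream("foo", "bar"), tokenN = 8)
```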
- def tokenizeStringsRaw(in: Stream[F, String], tokenN: Int): Stream[F, String]
Emits a string for every token, as determined by the Analyzer, in the input stream. Be careful: the end of one string will be joined with the beginning of the next in the Analyzer. See tokenizeStrings to automatically intersperse spaces.
- in
input stream to tokenize
- tokenN
maximum number of tokens to read at a time
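The raw variant is the counterpart of the sketch above: no space is inserted, so adjacent elements run together before reaching the Analyzer. `pipe` is again an assumed `AnalyzerPipe[IO]`.

```scala
import cats.effect.IO
import fs2.Stream

// With the raw variant the Analyzer sees the concatenated text "foobar",
// so element boundaries do not imply word boundaries. Use this only when
// the input elements are arbitrary splits of one continuous text.
def rawChunks(pipe: AnalyzerPipe[IO]): Stream[IO, String] =
  pipe.tokenizeStringsRaw(Stream("foo", "bar"), tokenN = 8)
```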
- final def wait(): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long, arg1: Int): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException])
- final def wait(arg0: Long): Unit
- Definition Classes
- AnyRef
- Annotations
- @throws(classOf[java.lang.InterruptedException]) @native()