Here's a solution using tuples (Scastie):
opaque type CheckedMap <: Map[Connector, Int] = Map[Connector, Int]

// Reduces to true iff some pair in T has E as its key type.
type Contains[E, T <: Tuple] <: Boolean = T match {
  case EmptyTuple => false
  case h *: t =>
    h match {
      case (E, _) => true
      case _      => Contains[E, t]
    }
}

// Reduces to DummyImplicit (always summonable) iff every element of S
// occurs as a key type in T; otherwise to Nothing (never summonable).
type ContainsAll[S <: Tuple, T <: Tuple] = S match {
  case EmptyTuple => DummyImplicit
  case h *: t =>
    Contains[h, T] match {
      case true  => ContainsAll[t, T]
      case false => Nothing
    }
}

type AllConnectors = (
  Connector.CHAdeMO.type,
  Connector.Mennekes.type,
  Connector.CCS.type,
  Connector.Tesla.type
)

def checkedMap[T <: Tuple](t: T)(using
    @annotation.implicitNotFound(
      "Not all Connector types given."
    ) c: ContainsAll[AllConnectors, T]
): CheckedMap = t.toList.asInstanceOf[List[(Connector, Int)]].toMap
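For reference, the Connector type itself isn't shown in the answer; a minimal enum this code could compile against (my assumption, not part of the original) is:
// Assumed definition, not shown in the original answer.
enum Connector:
  case CHAdeMO, Mennekes, CCS, Tesla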
This takes a tuple of (Connector, Int) pairs and checks that it contains every type listed in another tuple of all the Connector types. If the input mentions each connector type at least once, ContainsAll reduces to DummyImplicit, which is always in scope; otherwise it reduces to Nothing, for which no given can ever be found. The resulting error message is quite lengthy and unhelpful, so I put in a custom one with @implicitNotFound. Note that this doesn't check for duplicate keys, but it could trivially be modified to do so; a sketch follows below.
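A minimal sketch of that modification, reusing the Contains type from above (this extension is mine, not part of the original answer):
// Sketch: reduces to true only if no key type occurs twice in T.
type NoDuplicates[T <: Tuple] <: Boolean = T match {
  case EmptyTuple => true
  case h *: t =>
    h match {
      case (k, _) =>
        Contains[k, t] match {
          case true  => false            // key k appears again: duplicate
          case false => NoDuplicates[t]
        }
    }
}
checkedMap could then demand extra evidence such as d: NoDuplicates[T] =:= true alongside c.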
Unfortunately, I found myself having to explicitly annotate the key-value pairs' types at the use site:
// Errors because Tesla is missing
checkedMap(
  (
    Connector.CHAdeMO -> 1: (Connector.CHAdeMO.type, Int),
    Connector.Mennekes -> 2: (Connector.Mennekes.type, Int),
    Connector.CCS -> 3: (Connector.CCS.type, Int)
  )
)

// Valid
checkedMap(
  (
    Connector.CHAdeMO -> 1: (Connector.CHAdeMO.type, Int),
    Connector.Mennekes -> 2: (Connector.Mennekes.type, Int),
    Connector.CCS -> 3: (Connector.CCS.type, Int),
    Connector.Tesla -> 4: (Connector.Tesla.type, Int)
  )
)
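One way to avoid those ascriptions (a sketch of mine, not from the answer) is a tiny constructor whose Singleton bound stops the key from widening to Connector:
// Hypothetical helper: K is inferred as the precise singleton type,
// e.g. Connector.Tesla.type, instead of widening to Connector.
def kv[K <: Connector & Singleton](k: K, v: Int): (K, Int) = (k, v)

checkedMap((
  kv(Connector.CHAdeMO, 1),
  kv(Connector.Mennekes, 2),
  kv(Connector.CCS, 3),
  kv(Connector.Tesla, 4)
))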