The answer is no. A property set with `spark.conf.set` lasts only for the duration of the Spark application run, which in Databricks means only while the cluster is up. As soon as the cluster terminates, the setting is lost, and you cannot access it after the cluster restarts.
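As an illustration of this session scope, the snippet below shows a runtime setting in a Databricks notebook. It assumes the `spark` SparkSession that Databricks notebooks provide, and `spark.sql.shuffle.partitions` is used purely as an example property; this is a sketch, not a recommendation of that value.

```python
# Session-scoped: applies only to the current Spark application.
# It is lost when the cluster terminates or restarts.
spark.conf.set("spark.sql.shuffle.partitions", "64")

# Reading it back works only within the same application run.
print(spark.conf.get("spark.sql.shuffle.partitions"))
```

After a cluster restart, `spark.conf.get` returns the default (or cluster-configured) value again, not the value set above.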
How can we work around this constraint and set a configuration property permanently?