When I was trying to copy several Databricks tables to SQL Server, I couldn't find a straightforward way in the documentation to do it with Python. The problem is that Scala isn't supported on high-concurrency Databricks clusters. It turns out to be quite simple, though, and the destination SQL Server table is even created for you.

# JDBC connection string for the Azure SQL database
jdbcSqlURL = "jdbc:sqlserver://<server name>.database.windows.net:1433;database=<db name>;user=<user name>;password=<password>"

# Build a dataframe from the Databricks table
df = spark.sql("SELECT * FROM <databricks table name>")

# Write the dataframe to SQL Server; with mode("overwrite"), Spark drops and
# recreates the destination table if it already exists
df.write.format("jdbc").options(url=jdbcSqlURL, dbtable="<sql server dest table>").mode("overwrite").save()
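Since the goal was copying several tables, the same three lines can be wrapped in a loop. Below is a minimal sketch of that idea; the helper names (`build_jdbc_url`, `copy_tables`) and the example table names are hypothetical, and assembling the URL from parts makes it easy to swap the hard-coded credentials for a secrets-store lookup later.

```python
def build_jdbc_url(server, database, user, password):
    # Assemble the Azure SQL JDBC connection string from its parts
    # (same shape as the hard-coded jdbcSqlURL above)
    return (
        f"jdbc:sqlserver://{server}.database.windows.net:1433;"
        f"database={database};user={user};password={password}"
    )

def copy_tables(spark, jdbc_url, table_names):
    # Copy each Databricks table to a SQL Server table of the same name
    for name in table_names:
        df = spark.sql(f"SELECT * FROM {name}")
        (df.write.format("jdbc")
            .options(url=jdbc_url, dbtable=name)
            .mode("overwrite")
            .save())
```

In a notebook you would call `copy_tables(spark, build_jdbc_url(...), ["table_a", "table_b"])` with your own server details and table list.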