Hi, even though pushing to GitHub isn't allowed, clearing notebook outputs before committing to internal version control is still important. You can automate this with a pre-commit hook or a script in your internal CI/CD pipeline (if one exists). Tools like...
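As a rough illustration of the "script in your pipeline" option, here is a minimal sketch of an output-stripping step using the nbformat library; the script name and the idea of passing notebook paths as arguments are assumptions for illustration, not a specific tool recommendation:

```python
# clear_outputs.py - minimal sketch of an output-stripping script for a pre-commit hook or CI job.
# Assumes the nbformat package is installed; notebook paths are passed as command-line arguments.
import sys
import nbformat

def clear_notebook_outputs(path: str) -> None:
    """Read a .ipynb file, drop all code-cell outputs, and write it back in place."""
    nb = nbformat.read(path, as_version=4)
    for cell in nb.cells:
        if cell.cell_type == "code":
            cell.outputs = []
            cell.execution_count = None
    nbformat.write(nb, path)

if __name__ == "__main__":
    for notebook_path in sys.argv[1:]:
        clear_notebook_outputs(notebook_path)
```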
Hi, your current approach reloads dim_df in every batch, which can be inefficient. To optimize, consider broadcasting dim_df if it is small, or using mapGroupsWithState for stateful joins. Also, ensure that fact_df has sufficient watermarking to h...
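A minimal sketch of the broadcast option follows; the table names, join key, event-time column, and checkpoint path are placeholders, not taken from your job:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

# Load the small dimension table once, outside foreachBatch, and broadcast it in the join.
dim_df = spark.read.table("dim_table")  # placeholder table name

# Streaming fact source with a watermark on its event-time column (assumed to be "event_time").
fact_df = (
    spark.readStream.table("fact_table")  # placeholder streaming source
    .withWatermark("event_time", "10 minutes")
)

# Stream-static join; broadcasting dim_df avoids shuffling it for every micro-batch.
joined_df = fact_df.join(broadcast(dim_df), on="dim_key", how="left")

query = (
    joined_df.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/fact_dim_join")  # placeholder path
    .toTable("joined_output")  # placeholder output table
)
```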
Hi Pranav, hope you're doing well. This issue might be caused by a browser compatibility problem or account-specific settings. Try logging in from an incognito window or a different browser. If the problem persists, contact Databricks support dire...
Hi @DebIT2011, hope you're doing well. For incremental upserts and deletes from Cosmos DB, Databricks PySpark offers simplicity and unified management, especially for complex transformations and dependency handling. ADF may excel in GUI-based orch...
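If you go the PySpark route, the upsert/delete step typically ends in a Delta Lake MERGE. A minimal sketch, where the staging table, target table, key column, and soft-delete flag are all assumptions for illustration:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Incremental change set already landed from Cosmos DB (staging table name is a placeholder).
changes_df = spark.read.table("staging.cosmos_changes")

# Hypothetical Delta target table to keep in sync.
target = DeltaTable.forName(spark, "silver.customers")

(
    target.alias("t")
    .merge(changes_df.alias("s"), "t.id = s.id")          # "id" is an assumed key column
    .whenMatchedDelete(condition="s.is_deleted = true")   # assumed soft-delete flag from the change feed
    .whenMatchedUpdateAll()                                # remaining matches are updates
    .whenNotMatchedInsertAll()                             # new documents are inserts
    .execute()
)
```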
Ensure your vector_search_endpoint_name and vs_index_fullname match the deployment setup. Check model deployment logs for detailed errors and confirm your workspace's network settings allow access to Unity Catalog and model serving endpoints in a pri...
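One quick way to isolate whether the names are the problem is to query the index directly with the vector search client. A minimal sketch, assuming the databricks-vectorsearch package and a delta-sync index with managed embeddings; the endpoint name, index name, and column names are placeholders:

```python
from databricks.vector_search.client import VectorSearchClient

# Uses notebook credentials when run inside a Databricks workspace.
client = VectorSearchClient()

# Both names must match your deployment exactly, including the full catalog.schema.index path.
index = client.get_index(
    endpoint_name="my_vector_search_endpoint",   # placeholder endpoint name
    index_name="main.default.my_docs_index",     # placeholder Unity Catalog index name
)

# A direct similarity search; if this fails, the endpoint/index names or network access
# are the likely culprits rather than the model serving layer.
results = index.similarity_search(
    query_text="test query",
    columns=["id", "text"],   # assumed columns present in the index
    num_results=3,
)
print(results)
```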