add: seeding #86
Conversation
Warning: Rate limit exceeded. @kazumatsudo has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 19 minutes and 8 seconds before requesting another review; after the wait time has elapsed, a review can be triggered manually. We recommend spacing out commits to avoid hitting the rate limit. CodeRabbit enforces hourly rate limits for each developer per organization.

Walkthrough: The project has been updated to enforce better coding practices by preventing the use of `.toString` on arbitrary types.
Codecov Report

❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

Additional details and impacted files:

```
@@            Coverage Diff            @@
##             main      #86      +/-  ##
==========================================
- Coverage   75.07%   68.78%   -6.30%
==========================================
  Files          19       19
  Lines         317      346      +29
  Branches       34       39       +5
==========================================
  Hits          238      238
- Misses         79      108      +29
```

☔ View full report in Codecov by Sentry.
Review Status
Actionable comments generated: 3
Configuration used: CodeRabbit UI
Files selected for processing (5)
- build.sbt (2 hunks)
- src/main/scala/Main.scala (1 hunks)
- src/main/scala/utils/FileUtility.scala (1 hunks)
- src/main/scala/utils/JsonUtility.scala (1 hunks)
- src/test/scala/GenerateTestData.scala (1 hunks)
Additional comments: 7
build.sbt (2)
- 25-26: Adding `Wart.ToString` to the `wartremoverErrors` list is a good practice to avoid using `.toString` on types where it may not be well-defined, which can lead to runtime errors.
- 52-52: The addition of the `scalacheck-faker` library for testing purposes is appropriate for generating fake data in tests. Ensure that version `8.0.2` is compatible with other project dependencies.

src/main/scala/utils/JsonUtility.scala (2)
- 18-37: The `anyDecoder` implementation assumes that the JSON will contain only the types defined in `domain.table.ddl.column.ColumnType`. This assumption may not hold for all JSON inputs and could lead to runtime exceptions. It is important to ensure that the JSON being decoded strictly adheres to the expected format.
- 72-80: The `readForUsingSpecificKeyListRequest` and `writeForUsingSpecificKeyListRequest` methods are added to handle serialization and deserialization of `UsingSpecificKeyListRequest`. Ensure that these methods are used consistently throughout the codebase in place of any previous JSON handling logic for `UsingSpecificKeyListRequest`.

src/main/scala/Main.scala (1)
- 60-60: The method `readForUsingSpecificKeyListRequest` is used to parse JSON into `UsingSpecificKeyListRequest`. Ensure that the JSON input is in the format expected by this method to avoid runtime errors.

src/test/scala/GenerateTestData.scala (2)
- 22-91: The vertex-generation methods (`generateVertexAddress`, `generateVertexCompany`, `generateVertexPerson`, `generateVertexPokemon`, `generateVertexSchool`) use random data generation for properties. Ensure that the randomness does not affect the determinism of tests, or consider using fixed data where determinism is required.
- 262-296: The `usingSpecificKeyListRequest` object is created and serialized to JSON without validation of the data. Ensure that the generated data meets the expected schema and constraints of `UsingSpecificKeyListRequest`.
```scala
def writeJson(filename: String, jsonString: String): Unit = {
  val config = ConfigFactory.load()
  val directory = new File(config.getString("sql_output_directory"))

  if (!directory.exists()) {
    directory.mkdirs()
  }

  Using.Manager { use =>
    val fileOutputStream =
      use(new FileOutputStream(s"${directory.getPath}/$filename.json"))
    val writer = use(new OutputStreamWriter(fileOutputStream))
    writer.write(jsonString)
  }
}
```
The `writeJson` method does not surface exceptions that may occur during file operations: the `Try` returned by `Using.Manager` is discarded, so an `IOException` is silently lost. Consider recovering from (or at least logging) such failures gracefully.
```diff
  Using.Manager { use =>
    val fileOutputStream =
      use(new FileOutputStream(s"${directory.getPath}/$filename.json"))
    val writer = use(new OutputStreamWriter(fileOutputStream))
    writer.write(jsonString)
- }
+ }.recover { case e: IOException =>
+   () // handle the exception: log or rethrow
+ }
```

Note that `Using.Manager` already returns a `Try`, so no extra `Try(...)` wrapper is needed; wrapping it again would put the failure inside a nested `Try`, and an outer `recover` would never fire.
Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

```scala
def writeJson(filename: String, jsonString: String): Unit = {
  val config = ConfigFactory.load()
  val directory = new File(config.getString("sql_output_directory"))

  if (!directory.exists()) {
    directory.mkdirs()
  }

  Using.Manager { use =>
    val fileOutputStream =
      use(new FileOutputStream(s"${directory.getPath}/$filename.json"))
    val writer = use(new OutputStreamWriter(fileOutputStream))
    writer.write(jsonString)
  }.recover { case e: IOException =>
    () // handle the exception: log or rethrow
  }
}
```
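A minimal, self-contained sketch of the same recovery pattern (the `WriteJsonSketch` object and its signature are hypothetical; the project's real `writeJson` reads the directory from configuration instead of taking it as a parameter). It relies on the fact that `Using.Manager` in Scala 2.13+ already returns a `Try`, so the outcome can be inspected directly:

```scala
import java.io.{File, FileOutputStream, IOException, OutputStreamWriter}
import scala.util.{Failure, Success, Using}

object WriteJsonSketch {
  // Using.Manager returns Try[Unit]; pattern-match on it rather than
  // discarding it, so IO failures are surfaced instead of silently lost.
  def writeJson(directory: File, filename: String, jsonString: String): Boolean = {
    if (!directory.exists()) directory.mkdirs()
    val result = Using.Manager { use =>
      val out    = use(new FileOutputStream(new File(directory, s"$filename.json")))
      val writer = use(new OutputStreamWriter(out))
      writer.write(jsonString)
    }
    result match {
      case Success(_)              => true
      case Failure(_: IOException) => false // recover: log and signal failure
      case Failure(e)              => throw e // rethrow non-IO failures
    }
  }
}
```

`Using.Manager` closes the managed resources in reverse order of registration, so the writer is flushed and closed before the stream.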
```scala
implicit private val anyEncoder: Encoder[Any] = new Encoder[Any] {
  override def apply(value: Any): Json = {
    // encode only the types defined in domain.table.ddl.column.ColumnType
    value match {
      case v: Boolean => v.asJson
      case v: Byte    => v.asJson
      case v: Short   => v.asJson
      case v: Int     => v.asJson
      case v: Long    => v.asJson
      case v: Float   => v.asJson
      case v: Double  => v.asJson
      case v: UUID    => v.asJson
      case v: Char    => v.asJson
      case v: String  => v.asJson
      case v: RelationIdentifier => v.toString.asJson
      case v => v.toString.asJson // TODO: classify the type in detail
```
The `anyEncoder` uses a catch-all case that falls back to `.toString` for unknown types. This can produce unexpected results when a type lacks a meaningful `toString` implementation. It would be better to handle each expected type explicitly, or to fail for unsupported types.
```diff
- case v => v.toString.asJson // TODO: classify the type in detail
+ case _ => throw new UnsupportedOperationException("Unsupported type for JSON encoding")
```
Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

```scala
implicit private val anyEncoder: Encoder[Any] = new Encoder[Any] {
  override def apply(value: Any): Json = {
    // encode only the types defined in domain.table.ddl.column.ColumnType
    value match {
      case v: Boolean => v.asJson
      case v: Byte    => v.asJson
      case v: Short   => v.asJson
      case v: Int     => v.asJson
      case v: Long    => v.asJson
      case v: Float   => v.asJson
      case v: Double  => v.asJson
      case v: UUID    => v.asJson
      case v: Char    => v.asJson
      case v: String  => v.asJson
      case v: RelationIdentifier => v.toString.asJson
      case _ => throw new UnsupportedOperationException("Unsupported type for JSON encoding")
```
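The fail-fast alternative can also be sketched without the JSON library, by returning an `Either` so the caller decides how unsupported types are handled (the `SafeEncodeSketch` helper and its string-based output are hypothetical; the project's real encoder produces circe `Json` values):

```scala
import java.util.UUID

object SafeEncodeSketch {
  // Encode only known primitive types; report anything else as a Left
  // instead of silently falling back to .toString.
  def encodePrimitive(value: Any): Either[String, String] = value match {
    case v: Boolean => Right(v.toString)
    case v: Byte    => Right(v.toString)
    case v: Short   => Right(v.toString)
    case v: Int     => Right(v.toString)
    case v: Long    => Right(v.toString)
    case v: Float   => Right(v.toString)
    case v: Double  => Right(v.toString)
    case v: Char    => Right("\"" + v + "\"")
    case v: String  => Right("\"" + v + "\"")
    case v: UUID    => Right("\"" + v.toString + "\"")
    case other      => Left(s"Unsupported type: ${other.getClass.getName}")
  }
}
```

Returning `Either` (or throwing, as in the suggestion above) makes unsupported inputs visible at the call site, which is the point of the `Wart.ToString` rule added in build.sbt.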
```scala
private def connectEdgeBelongTo(
    g: GraphTraversalSource,
    from: Vertex,
    to: Vertex
) = g
  .addE("belongTo")
  .from(from)
  .to(to)
  .property("belongToId", Random.nextInt(Int.MaxValue))
  .next()

private def connectEdgeBreedPokemonTo(
    g: GraphTraversalSource,
    from: Vertex,
    to: Vertex
) = {
  val wantPokemonToLearn = Random.nextBoolean()

  val edge = g
    .addE("breedPokemon")
    .from(from)
    .to(to)
    .property("breedPokemonId", Random.nextInt(Int.MaxValue))
    .property("caught", faker.pokemonLocation())

  if (wantPokemonToLearn) {
    edge.property("wantPokemonToLearn", faker.pokemonMove())
  }

  edge.next()
}

private def connectEdgeParent(
    g: GraphTraversalSource,
    from: Vertex,
    to: Vertex
) = g
  .addE("parent")
  .from(from)
  .to(to)
  .property("parentId", Random.nextInt(Int.MaxValue))
  .next()

private def connectEdgeLive(
    g: GraphTraversalSource,
    from: Vertex,
    to: Vertex
) = g
  .addE("live")
  .from(from)
  .to(to)
  .property("edgeId", Random.nextInt(Int.MaxValue))
  .next()

private def connectEdgeLocation(
    g: GraphTraversalSource,
    from: Vertex,
    to: Vertex
) = g
  .addE("location")
  .from(from)
  .to(to)
  .property("locationId", Random.nextInt(Int.MaxValue))
  .next()
```
The edge-connecting methods (`connectEdgeBelongTo`, `connectEdgeBreedPokemonTo`, `connectEdgeParent`, `connectEdgeLive`, `connectEdgeLocation`) do not handle exceptions that may occur during graph operations. Consider adding error handling to manage such cases gracefully.
```diff
- g.addE("belongTo").from(from).to(to).property("belongToId", Random.nextInt(Int.MaxValue)).next()
+ Try(g.addE("belongTo").from(from).to(to).property("belongToId", Random.nextInt(Int.MaxValue)).next())
+   .recover { case e: Exception =>
+     () // handle the exception: log or rethrow
+   }
```
Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation.

```scala
private def connectEdgeBelongTo(
    g: GraphTraversalSource,
    from: Vertex,
    to: Vertex
) = Try(
  g.addE("belongTo")
    .from(from)
    .to(to)
    .property("belongToId", Random.nextInt(Int.MaxValue))
    .next()
).recover { case e: Exception =>
  () // handle the exception: log or rethrow
}

private def connectEdgeBreedPokemonTo(
    g: GraphTraversalSource,
    from: Vertex,
    to: Vertex
) = Try {
  val wantPokemonToLearn = Random.nextBoolean()

  val edge = g
    .addE("breedPokemon")
    .from(from)
    .to(to)
    .property("breedPokemonId", Random.nextInt(Int.MaxValue))
    .property("caught", faker.pokemonLocation())

  if (wantPokemonToLearn) {
    edge.property("wantPokemonToLearn", faker.pokemonMove())
  }

  edge.next()
}.recover { case e: Exception =>
  () // handle the exception: log or rethrow
}

private def connectEdgeParent(
    g: GraphTraversalSource,
    from: Vertex,
    to: Vertex
) = Try(
  g.addE("parent")
    .from(from)
    .to(to)
    .property("parentId", Random.nextInt(Int.MaxValue))
    .next()
).recover { case e: Exception =>
  () // handle the exception: log or rethrow
}

private def connectEdgeLive(
    g: GraphTraversalSource,
    from: Vertex,
    to: Vertex
) = Try(
  g.addE("live")
    .from(from)
    .to(to)
    .property("edgeId", Random.nextInt(Int.MaxValue))
    .next()
).recover { case e: Exception =>
  () // handle the exception: log or rethrow
}

private def connectEdgeLocation(
    g: GraphTraversalSource,
    from: Vertex,
    to: Vertex
) = Try(
  g.addE("location")
    .from(from)
    .to(to)
    .property("locationId", Random.nextInt(Int.MaxValue))
    .next()
).recover { case e: Exception =>
  () // handle the exception: log or rethrow
}
```
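The `Try`-and-recover pattern suggested for these helpers can also be factored into a single reusable function, so each `connectEdge*` method stays a one-liner. A library-free sketch (the `safely` helper is hypothetical; the by-name parameter stands in for the Gremlin call chain `g.addE(...)....next()`, which is assumed and not shown):

```scala
import scala.util.{Failure, Success, Try}

object EdgeSketch {
  // Wrap any edge-adding traversal in Try; on failure, log and skip the
  // edge (returning None) instead of letting the exception escape.
  def safely[A](label: String)(addEdge: => A): Option[A] =
    Try(addEdge) match {
      case Success(edge) => Some(edge)
      case Failure(e) =>
        println(s"failed to add '$label' edge: ${e.getMessage}")
        None
    }
}
```

Usage would look like `safely("belongTo")(g.addE("belongTo").from(from).to(to).next())`, keeping the error-handling policy in one place.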
# - USING_SPECIFIC_KEY_LIST_OUTPUT_DIRECTORY
# - USING_SPECIFIC_KEY_LIST_FILENAME
Review Status
Actionable comments generated: 3
Configuration used: CodeRabbit UI
Files selected for processing (6)
- .gitignore (1 hunks)
- build.sbt (2 hunks)
- src/main/resources/application.conf (1 hunks)
- src/main/scala/Main.scala (5 hunks)
- src/main/scala/utils/FileUtility.scala (1 hunks)
- src/test/scala/GenerateTestData.scala (1 hunks)
Files skipped from review due to trivial changes (1)
- .gitignore
Files skipped from review as they are similar to previous changes (3)
- build.sbt
- src/main/scala/utils/FileUtility.scala
- src/test/scala/GenerateTestData.scala
Additional comments: 7
src/main/resources/application.conf (3)
- 4-4: The configuration `analysis_method_using_specific_key_list_filepath` has been changed to support environment variable substitution.
- 9-9: The configuration `using_specific_key_list_output_directory` has been changed to support environment variable substitution.
- 11-11: The configuration `using_specific_key_list_filename` has been changed to support environment variable substitution.

src/main/scala/Main.scala (4)
- 7-7: An import for `scala.util.control.NonFatal` has been added; it is used for exception handling.
- 61-61: The method `parseForUsingSpecificKeyListRequest` has been replaced with `readForUsingSpecificKeyListRequest`, implying a refactoring of the JSON parsing logic.
- 93-93: The `writeSql` method is used to write the vertices DML to a file. As with the previous comment, ensure that the file path and permissions are correctly handled.

  Verification successful: the verification script confirmed that the current directory (`.`) is writable. Since the `dml_vertex` path is not absolute and no base path is provided, the script assumes it is relative to the current working directory (the project root in the script's execution environment). The directory for SQL DML vertices, as specified by the default value in `application.conf`, is therefore writable.
- 119-122: Exception handling has been added using `NonFatal`. This is good practice for catching and logging non-fatal exceptions so the application can exit gracefully.

Scripts executed for the analysis:

```shell
#!/bin/bash
# Verify that the directory for SQL DML vertices exists and is writable.
if [ -w "$(dirname "$(config getString "sql_dml_vertex")")" ]; then
  echo "Directory for SQL DML vertices is writable."
else
  echo "Directory for SQL DML vertices is not writable or does not exist."
  exit 1
fi
```

```shell
#!/bin/bash
# Attempt to find the configuration file that might contain the "sql_dml_vertex" setting.
rg "sql_dml_vertex"
```

```shell
#!/bin/bash
# Check if the directory for the default "dml_vertex" path is writable,
# assuming "dml_vertex" is a relative path from the project root.
if [ -w "$(dirname "dml_vertex")" ]; then
  echo "Directory for SQL DML vertices is writable."
else
  echo "Directory for SQL DML vertices is not writable or does not exist."
  exit 1
fi
```
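The environment-variable substitution mentioned for application.conf is typically expressed with HOCON's optional `${?VAR}` form, where a later assignment overrides the default only when the variable is set. A sketch with hypothetical default values (the project's actual defaults live in `application.conf`):

```hocon
# default value, overridden by the environment variable when it is set
using_specific_key_list_output_directory = "json"
using_specific_key_list_output_directory = ${?USING_SPECIFIC_KEY_LIST_OUTPUT_DIRECTORY}

using_specific_key_list_filename = "using_key_list"
using_specific_key_list_filename = ${?USING_SPECIFIC_KEY_LIST_FILENAME}
```

If the environment variable is unset, the `${?...}` line is simply dropped and the default above it remains in effect.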
```diff
@@ -78,7 +79,7 @@ object Main extends StrictLogging {

     /* output SQL */
     verticesDdlResult.foreach { vertexDdl =>
-      FileUtility.outputSql(
+      FileUtility.writeSql(
```
Tip: Codebase Verification

The verification process confirmed that the default directory for SQL DDL vertices, `sql` according to `application.conf`, is not writable or does not exist in the current environment. This could lead to file-system exceptions when the application attempts to write to it.

Analysis chain: The `writeSql` method is used to write the vertices DDL to a file. Ensure that the file path and permissions are correctly handled to prevent any file-system-related exceptions.

Scripts executed for the analysis:

```shell
#!/bin/bash
# Verify that the directory for SQL DDL vertices exists and is writable.
if [ -w "$(dirname "$(config getString "sql_ddl_vertex")")" ]; then
  echo "Directory for SQL DDL vertices is writable."
else
  echo "Directory for SQL DDL vertices is not writable or does not exist."
  exit 1
fi
```

```shell
#!/bin/bash
# Attempt to find where the configuration for "sql_ddl_vertex" might be set within the Scala files.
rg "sql_ddl_vertex"
```

```shell
#!/bin/bash
# Check the content of application.conf to confirm the default value for "sql_ddl_vertex".
cat src/main/resources/application.conf
```

```shell
#!/bin/bash
# Check if the default SQL output directory is writable or if an environment variable overrides it.
if [ -n "$SQL_DDL_VERTEX" ]; then
  # If the environment variable is set, check that directory
  if [ -w "$(dirname "$SQL_DDL_VERTEX")" ]; then
    echo "Environment variable directory for SQL DDL vertices is writable."
  else
    echo "Environment variable directory for SQL DDL vertices is not writable or does not exist."
    exit 1
  fi
else
  # Check the default directory
  if [ -w "sql" ]; then
    echo "Default directory for SQL DDL vertices is writable."
  else
    echo "Default directory for SQL DDL vertices is not writable or does not exist."
    exit 1
  fi
fi
```
```diff
@@ -100,20 +101,25 @@ object Main extends StrictLogging {
     )

     edgesDdlResult.foreach { edgesDdlResult =>
-      FileUtility.outputSql(
+      FileUtility.writeSql(
```
Tip: Codebase Verification

The verification process determined that the directory `sql/ddl_edge`, the default location for SQL DDL edges as specified in `application.conf`, is not writable or does not exist. This could lead to file-system exceptions when the application attempts to write to it. Ensure that the directory exists and has the appropriate permissions set to allow writing by the application.

Analysis chain: The `writeSql` method is used to write the edges DDL to a file. Ensure that the file path and permissions are correctly handled to prevent any file-system-related exceptions.

Scripts executed for the analysis:

```shell
#!/bin/bash
# Verify that the directory for SQL DDL edges exists and is writable.
if [ -w "$(dirname "$(config getString "sql_ddl_edge")")" ]; then
  echo "Directory for SQL DDL edges is writable."
else
  echo "Directory for SQL DDL edges is not writable or does not exist."
  exit 1
fi
```

```shell
#!/bin/bash
# Attempt to find the configuration file or class that might contain the "sql_ddl_edge" setting.
rg "sql_ddl_edge"
```

```shell
#!/bin/bash
# Check the contents of application.conf to see the context of "sql_ddl_edge".
cat src/main/resources/application.conf
```

```shell
#!/bin/bash
# Construct the full path for SQL DDL edges by combining sql_output_directory
# and sql_ddl_edge, then check whether that directory is writable.

# Default values from application.conf
sql_output_directory="sql"
sql_ddl_edge="ddl_edge"

full_path="${sql_output_directory}/${sql_ddl_edge}"

if [ -w "${full_path}" ]; then
  echo "Directory '${full_path}' for SQL DDL edges is writable."
else
  echo "Directory '${full_path}' for SQL DDL edges is not writable or does not exist."
  exit 1
fi
```
Length of output: 281
```diff
       config.getString("sql_ddl_edge"),
       edgesDdlResult.toSqlSentence
     )
   }
   displayOperationResult("generate edges DDL", edgesDdlResult.nonEmpty)

   edgesDmlResult.foreach { edgesDmlResult =>
-    FileUtility.outputSql(
+    FileUtility.writeSql(
```
Tip: Codebase Verification

The verification process determined that the directory specified for SQL DML edges (`sql` by default) is not writable or does not exist in the current environment. Ensure that the directory exists and has the appropriate permissions for the application to write files.

Analysis chain: The `writeSql` method is used to write the edges DML to a file. Ensure that the file path and permissions are correctly handled.

Scripts executed for the analysis:

```shell
#!/bin/bash
# Verify that the directory for SQL DML edges exists and is writable.
if [ -w "$(dirname "$(config getString "sql_dml_edge")")" ]; then
  echo "Directory for SQL DML edges is writable."
else
  echo "Directory for SQL DML edges is not writable or does not exist."
  exit 1
fi
```

```shell
#!/bin/bash
# Attempt to find the configuration file and extract the path for "sql_dml_edge".
config_files=$(fd 'application\.conf$')
for file in $config_files; do
  # HOCON is commonly used in Scala applications: grep for the key, print the value
  path=$(grep 'sql_dml_edge' $file | awk -F':' '{print $2}' | xargs)
  if [ ! -z "$path" ]; then
    echo "Found path for sql_dml_edge: $path"
    # Check if the directory of the found path is writable
    if [ -w "$(dirname "$path")" ]; then
      echo "Directory for SQL DML edges is writable."
    else
      echo "Directory for SQL DML edges is not writable or does not exist."
    fi
  fi
done
```

```shell
#!/bin/bash
# Inspect application.conf to determine the correct key for the SQL DML edge path.
cat src/main/resources/application.conf
```

```shell
#!/bin/bash
# Use SQL_DML_EDGE if set; otherwise fall back to the default from the configuration file.
sql_dml_edge_path=${SQL_DML_EDGE:-dml_edge}
sql_output_dir=${SQL_OUTPUT_DIRECTORY:-sql}

if [ -w "$sql_output_dir" ]; then
  echo "Directory for SQL DML edges is writable: $sql_output_dir"
else
  echo "Directory for SQL DML edges is not writable or does not exist: $sql_output_dir"
fi
```
#85

Summary by CodeRabbit

- New Features
- Enhancements
- Library Updates