Please see https://win-vector.com/2017/05/26/managing-spark-data-handles-in-r/ for details. Note: one usually needs to alter the keys column, which the default keyInspector (key_inspector_all_cols) simply populates with all columns.

Usage

describe_tables(db, tablenames, ..., keyInspector = key_inspector_all_cols)

Arguments

db

database handle

tablenames

character, names of tables to describe

...

force later arguments to bind by name

keyInspector

function that determines the preferred primary key set for each table

Value

A table (data.frame) describing the tables.

Details

Please see vignette('DependencySorting', package = 'rquery') and vignette('joinController', package = 'rquery') for more details.
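A custom keyInspector is any function that accepts the database handle and a table name and returns a named character vector of proposed key columns, following the convention of key_inspector_all_cols. A minimal sketch under that assumption (the "columns ending in id" heuristic and the names pick_id_columns / key_inspector_id_suffix are illustrative, not part of rquery):

```r
# helper: propose key candidates from a vector of column names
# (heuristic for illustration only: treat columns ending in "id" as keys)
pick_id_columns <- function(cols) {
  keys <- cols[grepl("id$", cols)]
  names(keys) <- keys  # named vector, matching key_inspector_all_cols's convention
  keys
}

# hypothetical key inspector with the (db, tablename) interface
key_inspector_id_suffix <- function(db, tablename) {
  pick_id_columns(DBI::dbListFields(db, tablename))
}
```

It would then be passed the same way as key_inspector_sqlite in the example below: describe_tables(my_db, ex$tableName, keyInspector = key_inspector_id_suffix).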

Examples


if (requireNamespace("DBI", quietly = TRUE) && requireNamespace("RSQLite", quietly = TRUE)) {
  my_db <- DBI::dbConnect(RSQLite::SQLite(), ":memory:")
  ex <- example_employee_date(my_db)
  print(describe_tables(my_db, ex$tableName,
                        keyInspector = key_inspector_sqlite))
  DBI::dbDisconnect(my_db)
}
#>         tableName isEmpty               indicatorColumn
#> 1 employeeanddate   FALSE table_employeeanddate_present
#> 2         revenue   FALSE         table_revenue_present
#> 3        activity   FALSE        table_activity_present
#> 4        orgtable   FALSE        table_orgtable_present
#>                      columns       keys
#> 1                   id, date           
#> 2            date, dept, rev date, dept
#> 3 eid, date, hours, location  eid, date
#> 4  eid, date, dept, location  eid, date
#>                                   colClass
#> 1                       character, integer
#> 2              integer, character, integer
#> 3   character, integer, integer, character
#> 4 character, integer, character, character