Command Line Operation

Batch operation is possible with installed applications and from Python source code.
Run as follows:
  • Python source code:
python arelleCmdLine.py arguments
  • Windows x64 or 32 bit:
"c:\Program Files\arelleCmdLine.exe" arguments
  • MacOS App:
/Applications/Arelle.app/Contents/MacOS/arelleCmdLine arguments
  • Linux/Unix:
./arelleCmdLine arguments
(On Sparc/Solaris, or otherwise as needed, you may need to set the .so loading path:
LD_LIBRARY_PATH=`pwd` ./arelleCmdLine arguments ...)
where arguments are:
  • -h or --help for the arguments supported by the running version of the program
  • -a or --about for the compilation date, version, copyright, and license information
Specifying input files
  • -f or --file followed by a file name (the entry point to be loaded: an instance, schema, linkbase, inline XBRL instance, testcase file, or testcase index file).  The file may be local or a URL to a web-located file.  (If packages are loaded and enabled, remapping of URLs into package files is effective.)  See the example following this list.
  • --username followed by username if needed (with password) for web file retrieval
  • --password if needed (with username) for web file retrieval
  • -i or --import followed by a list of files to import to the DTS, such as additional formula or label linkbases.  Multiple file names are separated by the "|" character.
  • -d or --diff followed by a file name for a second DTS to be compared for versioning report generation.  In this case the -f file is the fromDTS and the -d file is the toDTS.
    • -r or --report, followed by the file name into which to save a versioning report (companion to -d or --diff).
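For example, to load a local instance and import an additional label linkbase into its DTS (file names here are illustrative):
arelleCmdLine -f myInstance.xbrl -i myLabels-lab.xml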
Validations
  • -v or --validate: perform validation according to the file type.  If an XBRL file, it is validated according to XBRL 2.1 validation, calculation linkbase validation if either --calcDecimals or --calcPrecision is specified, and disclosure system validation (as selected).  If a test suite or testcase, the test case variations are individually validated.  If formulae are present they will be validated and run unless --formula=none is specified.  See the example following this list.
  • --calcDecimals: select calculation linkbase validation, inferring decimals (current recommendation)
    • --calcPrecision: select calculation linkbase validation, inferring precision instead of decimals (deprecated; prior XBRL 2.1 recommendation)
  • --disclosureSystem followed by a system code (such as 'efm'), for disclosure system rules validation.  Enter --disclosureSystem=help for a list of names, or help-verbose for a list of names and descriptions.  (Replaces --efm and --gfm, which are deprecated.)
  • --utr: select validation with respect to the Unit Types Registry.  (Some disclosure system validations, such as U.S. SEC, activate UTR validation or provide specific UTR files, so this entry would not be needed.)
    • --utrUrl followed by a URL or local file path overrides the system's Unit Types Registry (not needed with a --disclosureSystem argument).
  • --infoset specifies validation with respect to test case infosets (which are part of some test case suites).
  • --labelLang provides an xml:lang language code to override the system language settings for labels output by the following file options, e.g., --labelLang=en-US
  • --labelRole overrides the standard XBRL 2.1 label role for the labels output by the following file options, e.g., --labelRole=http://www.xbrl.org/2003/role/terseLabel
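For example, a hypothetical SEC filing could be validated with calculation checking and Edgar Filer Manual rules as follows (the file name is illustrative):
arelleCmdLine -f myFiling.xml -v --calcDecimals --disclosureSystem efm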
The following file options determine the type of file saved by the extension of the file name; the extension may be any of ".html", ".htm", ".xhtml", ".json", or ".csv".  The prior options named --csv* are deprecated but still work.  (See the example following this list.)
  • --DTS followed by file name: write the DTS tree into the specified file
  • --facts followed by file name: write the fact list into the specified file
    • --factListCols followed by a list of columns for a --facts list; the columns are comma separated and can be any of these: Label, Name, contextRef, unitRef, Dec, Prec, Lang, Value, EntityScheme, EntityIdentifier, Period, or Dimensions.  The default columns list is "Label contextRef unitRef Dec Prec Lang Value".
  • --factTable followed by file name: write the fact table into the specified file
  • --concepts followed by file name: write the concepts (elements) list into the specified file
  • --pre followed by file name: store the presentation linkbase in the specified file
  • --cal followed by file name: store the calculation linkbase in the specified file
  • --dim followed by file name: store the dimensions of the definition linkbase in the specified file
  • --formulae followed by file name: store the formula-related resources tree in the specified file
  • --viewArcrole followed by an arcrole: store relationships for the arcrole in the file specified by --viewFile
    • --viewFile specifies the file in which to store --viewArcrole relationships
  • --roleTypes followed by file name: store defined role types in the specified file
  • --arcroleTypes followed by file name: store defined arcrole types in the specified file
  • --testReport followed by file name: write a test report of validation (of test cases) into the specified file
    • --testReportCols followed by a list of columns for a --testReport; the columns are comma separated and can be any of these: Index, Testcase, ID, Name, Reference, ReadMeFirst, Status, Expected, Actual.
  • --rssReport followed by file name: write a report of RSS feed processing into the specified file
    • --rssReportCols followed by a list of columns for an --rssReport; the columns are comma separated and can be any of these: Company Name, Accession Number, Form, Filing Date, CIK, Status, Period, Yr End, Results.
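For example, mirroring the exportCsvFromXbrlInstance.bat example included with the Windows build, a fact list with selected columns can be written as follows (file names are illustrative):
arelleCmdLine -f myInstance.xbrl --facts facts.csv --factListCols "Label unitRef Dec Value EntityScheme EntityIdentifier Period Dimensions"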
Logging options:
  • --logFile: write log messages into the specified file, otherwise they go to standard output.  If the file name ends in .xml it is XML-formatted, otherwise it is text.  XML log files contain significant additional details not provided in the text files.  The file names logToPrint, logToStdOut.xml and logToStdError provide output on the standard output and standard error streams.  See the example following this list.
    • --logFormat: specify the Python logger format for the text description of messages; if absent the default is [%(messageCode)s] %(message)s - %(file)s.
    • --logLevel: minimum level for message capture; messages below this level are ignored.  The current order of levels is debug, info, info-semantic, warning, warning-semantic, assertion-satisfied, inconsistency, error-semantic, assertion-not-satisfied, and error.
    • --logLevelFilter: regular expression filter for logLevel.  (E.g., to not match *-semantic levels, --logLevelFilter=(?!^.*-semantic$)(.+).)
    • --logCodeFilter: regular expression filter for the log message code
  • --collectProfileStats: collect profile statistics, such as timing of validation activities and formulae.
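For example, to validate and capture only warnings and errors in an XML-formatted log (file names are illustrative):
arelleCmdLine -f myInstance.xbrl -v --logFile validation-log.xml --logLevel warning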
Validation and formula options:
  • --parameters specifies parameters for formula and validation (name=value[,name=value]).
    • --parameterSeparator: specify the parameter separator string (if other than comma; may be a multi-character separator).
  • --formula: specify the formula action (see the example following these options).  If this option is not specified, -v or --validate will validate and run formulas if present:
    • validate – validate only, without running;
    • run – validate and run; or
    • none – prevent formula validation or running when also specifying -v or --validate.
  • --formulaParamExprResult: formula tracing option
  • --formulaParamInputValue: formula tracing option
  • --formulaCallExprSource: formula tracing option
  • --formulaCallExprCode: formula tracing option
  • --formulaCallExprEval: formula tracing option
  • --formulaCallExprResult: formula tracing option
  • --formulaVarSetExprEval: formula tracing option
  • --formulaVarSetExprResult: formula tracing option
  • --formulaVarSetTiming: show times of variable set evaluation
  • --formulaAsserResultCounts: formula tracing option
  • --formulaFormulaRules: formula tracing option
  • --formulaVarsOrder: formula tracing option
  • --formulaVarExpressionSource: formula tracing option
  • --formulaVarExpressionCode: formula tracing option
  • --formulaVarExpressionEvaluation: formula tracing option
  • --formulaVarExpressionResult: formula tracing option
  • --formulaVarFiltersResult: shows the result of filters
  • --formulaVarFiltersWinnowing: shows the winnowing process of successive filter application, helpful in debugging to find out which filter affects the facts that pass a filtering step
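For example, to validate formulas without running them and pass a hypothetical parameter to the formula processor (the parameter name, value, and file name are illustrative):
arelleCmdLine -f myInstance.xbrl -v --formula=validate --parameters "myParam=myValue"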
Configuration options
  • --uiLang: Language for the user interface (overrides system settings, such as for program messages).  The setting is not saved.  Does not affect the label language of the above reports (see --labelLang).
  • --proxy: Modify and re-save the proxy settings configuration. Enter system to use the system proxy setting, none to use no proxy, a specific proxy in the format http://[user[:password]@]host[:port] (e.g., http://192.168.1.253, http://example.com:8080, http://joe:secret@example.com:8080), or show to show the current (saved) setting.
  • --internetConnectivity: Specify internet connectivity: online or offline.
  • --internetTimeout: Specify the internet connection timeout in seconds (0 (zero) means unlimited).
  • --internetRecheck: Specify how often to recheck a web file's time stamp against the cached copy, to determine whether a newer file should be reloaded: weekly (default), daily or never.
  • --internetLogDownloads: Log an info message for downloads to the web cache.
  • --xdgConfigHome: Specify a non-standard location for configuration and cache files (overrides the environment variable XDG_CONFIG_HOME, if also provided).
  • --plugins: modify the plug-in configuration:
    • The configuration is re-saved unless temp is in the module list.
    • Enter show to show the current plug-in configuration.
    • Commands (show and module URLs) are '|' separated:
      +url to add a plug-in by its URL or filename,
      ~name to reload a plug-in by its name,
      -name to remove a plug-in by its name.
      • Relative URLs are relative to the installation plug-in directory (e.g.,
        '+http://arelle.org/files/hello_web.py',
        '+C:\Program Files\Arelle\examples\plugin\hello_dolly.py' to load, or +../examples/plugin/hello_dolly.py for relative use of the examples directory,
        ~Hello Dolly to reload, -Hello Dolly to remove).
    • If + is omitted from a .py file, nothing is saved (same as temp).
    • Packaged plug-in URLs are their directory's URL.
  • --packages: modify the taxonomy packages configuration.
    • The configuration is re-saved unless temp is in the module list.
    • Enter show to show the current packages configuration.
    • Commands (show and module URLs) are '|' separated:
      +url to add a package by its URL or filename,
      ~name to reload a package by its name,
      -name to remove a package by its name.
    • URLs are full absolute paths.
    • If + is omitted from a package file, nothing is saved (same as temp).
  • --abortOnMajorError: Abort the process on a major error, such as when the load is unable to find an entry or discovered file.
  • --webserver: Start a web server on host:port[:server] for REST and web access, e.g., --webserver localhost:8080, or specify a non-default server name, such as cherrypy: --webserver localhost:8080:cherrypy.
  • (It is possible to specify options, such as disclosureSystem and validations, to be defaults for the web server, but not file names.)  See the examples following this list.
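For example, the following hypothetical invocations add a taxonomy package for the current run only (see --packages above), validate offline against the local cache, and start the REST web server (file names and the port are illustrative):
arelleCmdLine -f myInstance.xbrl -v --internetConnectivity offline --packages myTaxonomyPackage.zip
arelleCmdLine --webserver localhost:8080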

Plug-in Options

Many of the plug-ins add command line options (in addition to GUI menu ones).  For additional detail see the plug-in documentation.  A summary of the standard plug-in options is provided here:

inlineDocumentSet.py

--save-instance: provides a file name into which to save an XBRL instance document derived from a loaded manifest (Japan FSA) of a document set of inline documents, or from a loaded single inline document.  (Currently supports one target instance document only.)
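
For example, assuming the plug-in is loaded temporarily from the installation plug-in directory, a target instance could be extracted from an inline XBRL document as follows (file names are illustrative):
arelleCmdLine -f myInlineDocument.htm --plugins inlineDocumentSet.py --save-instance extractedInstance.xbrl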

loadFromExcel.py

(No command line options, but the -f or --file option can specify an Excel document which represents a DTS.)

profileCmdLine.py

--save-profiler-report: specifies a file into which to save a Python profiler report of the timings of Arelle Python methods and routines for the operations performed by the other command line functions.  Useful to determine whether a long-running operation is spending excessive time in a function that needs implementation attention.
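
For example, a hedged sketch of profiling a validation run, assuming the plug-in is loaded temporarily from the installation plug-in directory (file names are illustrative):
arelleCmdLine -f myInstance.xbrl -v --plugins profileCmdLine.py --save-profiler-report profilerReport.txt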

saveDTS.py (produces TaxonomyPackage)

--package-DTS: specifies a file into which to save a Taxonomy Package representing the loaded DTS
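
For example, assuming the plug-in is loaded temporarily from the installation plug-in directory, the loaded DTS could be saved as a taxonomy package as follows (file names are illustrative):
arelleCmdLine -f myEntryPoint.xsd --plugins saveDTS.py --package-DTS myTaxonomyPackage.zip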

saveHtmlEBAtables.py (EBA Table Sets)

--save-EBA-tablesets: specifies an index file (usually index.html) to represent an html page that holds a table of contents to the EBA tables generated for all loaded EBA Table Linkbase tables in the DTS.

saveLoadableExcel.py

--save-loadable-excel: saves an Excel workbook representing the DTS (which can subsequently be loaded by the loadFromExcel.py plugin).

saveSKOS.py

--save-skos: saves a SKOS (OMG standard OWL file) representing a semantic view of a DTS based on presentation linkbase entries (for US-GAAP, IFRS, and EDInet style taxonomies).

validateSecTagging.py

--save-sec-tag-dts-matches: adds checks for SEC non-negative items, based on an algorithmic and weighted approach, to the SEC disclosure system validation

sphinx (subpackage; features including formula linkbase generation have not been sufficiently tested)

--import-sphinx: Import sphinx files to the DTS for validation.  Multiple file names are separated by a '|' character.
--generate-sphinx-formula-linkbase: Generate an XBRL formula linkbase from sphinx files.  Multiple file names are separated by a '|' character.  Files may be xrb archives, xsr source files, or directories of same.
--generated-sphinx-formulas-directory: Directory for the generated XBRL formula linkbases.  (If absent, formula linkbases are saved in the sphinx files directory.)
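
For example, a hedged sketch of validating an instance against sphinx rules, assuming the sphinx subpackage is loaded as a packaged plug-in from the installation plug-in directory (file names are illustrative):
arelleCmdLine -f myInstance.xbrl -v --plugins sphinx --import-sphinx myRules.xsr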

xbrlDB

--store-to-XBRL-DB: store into an XBRL database.  Provide a connection string: host,[port],user,password,database,[timeout],technology:
postgres – XBRL-US Postgres SQL
mssqlSemantic – Semantic MSSQL SQL
mysqlSemantic – Semantic MySQL SQL
orclSemantic – Semantic Oracle SQL
pgSemantic – Semantic Postgres SQL
rexster – Rexster (Titan Cassandra)
rdfDB – RDF (Turtle, NanoSparqlServer)
json – JSON (JSON, MongoDB)
For rdfDB if hostname is rdfTurtleFile or rdfXmlFile, then database specifies a local file system pathname to store a file.  For json, if hostname is jsonFile then database specifies a local file system path name to store into a file.
Examples:
Store into postgres database using XBRL-US schema:
arelleCmdLine -f c:\temp\test.rss -v --disclosureSystem efm-pragmatic-all-years --store-to-XBRL-DB "myhost.com,8084,pgUserId,pgPassword,test_db,90,postgres"
Store into local MSSQL database using SQLEXPRESS instance:
arelleCmdLine -f c:\temp\test.rss -v --disclosureSystem efm-pragmatic-all-years --store-to-XBRL-DB "localhost\SQLEXPRESS,,sqlLogin,sqlPassword,,90,mssqlSemantic"

Disclosure System Selections

The disclosure systems are documented in the config directory file disclosuresystems.xml.  The current entries are:
  • fsa: Japan FSA
Japan FSA example entry
Default language Japanese
EDInet identifier patterns
Allowed references TBD
  • ifrs: IFRS-Example
IFRS Example
Default language English
EDInet identifier patterns
Allowed references TBD
  • sbr-nl: SBR-NL
SBR Netherlands
Default language Dutch
EDInet identifier patterns
Allowed references TBD
  • hmrc: UK HMRC (Joint Filing Validation Checks)
UK HMRC Joint Filing Common Validation Checks
Default language en-UK
Disallowed references are processed
  • efm-strict: US SEC (Edgar Filing Manual, Strict)
US SEC Edgar Filing Manual, 2013
Default language en-US (en allowed in some cases per EFM)
CIK identifier patterns
Allowed references http://www.sec.gov/info/edgar/edgartaxonomies.shtml
Disallowed references are processed
Includes content (semantic) tests
  • efm-pragmatic: US SEC (Edgar Filing Manual, Pragmatic)
US SEC Edgar Filing Manual, 2013
Default language en-US (en allowed in some cases per EFM)
CIK identifier patterns
Allowed references http://www.sec.gov/info/edgar/edgartaxonomies.shtml
Disallowed references are blocked and not loaded (same as SEC production system)
No content (semantic) tests are reported
  • efm-strict-all-years: US SEC (Edgar Filing Manual, Strict, all years)
US SEC Edgar Filing Manual, 2012
Default language en-US (en allowed in some cases per EFM)
CIK identifier patterns
Allowed references http://www.sec.gov/info/edgar/edgartaxonomies.shtml
Disallowed references are processed
Includes content (semantic) tests
  • efm-pragmatic-all-years: US SEC (Edgar Filing Manual, Pragmatic, all years)
US SEC Edgar Filing Manual, 2012
Default language en-US (en allowed in some cases per EFM)
CIK identifier patterns
Allowed references http://www.sec.gov/info/edgar/edgartaxonomies.shtml
Disallowed references are blocked and not loaded (same as SEC production system)
No content (semantic) tests are reported
  • gfm-us: US SEC (Global Filing Manual)
US SEC interpretation of Global Filing Manual
Default language en
CIK identifier patterns
Allowed references http://www.sec.gov/info/edgar/edgartaxonomies.shtml
Disallowed references are processed

Batch Script Files

A set of batch script files is provided for convenience in running repetitive tasks.  Contributors are encouraged to improve them and make them suitable for Mac and Linux (originally just Windows):
  • runEFMTests.bat:  Runs all of the EFM tests in the indicated source tree of test files.  Outputs log with error messages, and an Excel-compatible file (.csv for the moment) with the same appearance as the tests tree grid GUI view.
  • runUS-GFMTests.bat:  Same, but runs tests in the Global Filing Manual mode instead of Edgar mode.  Just for experimentation at this moment.
  • runXDTTests.bat:  Runs the XDT conformance suite, and outputs messages log and Excel report.
  • runGenerateVersioningTestcases.bat:  Generates a versioning creation/consumption test suite from the Excel index file, such as 1000-2000-index.xml.
  • runVersioningConsumptionTests.bat:  Executes versioning consumption testcases from the index file noted in the settings.
 

51 Responses to Command Line Operation

  1. Harold Kinds says:

    I have tested the cmdline:

    - loading the msft instance works OK
    sw: arellecmdline -f msft\msft-20110630.xml
    [info] loaded in 4,20 secs at 2011-08-17T13:19:31

    - validating the msft instance works OK
    sw: arellecmdline -f msft\msft-20110630.xml -v
    [info] loaded in 4,19 secs at 2011-08-17T13:19:47
    [info] validated in 5,22 secs

    - exporting the DTS to a csv works OK
    sw: arellecmdline -f msft\msft-20110630.xml --csvDTS dts.csv
    [info] loaded in 4,19 secs at 2011-08-17T13:20:56

    - but exporting the pre (same for Dim and Cal): not OK!
    sw: arellecmdline -f msft\msft-20110630.xml --csvPre pre.csv
    Traceback (most recent call last):
    File "c:\python32x86\lib\site-packages\cx_Freeze\initscripts\Console3.py", line 27, in
    File "arelleCmdLine.py", line 11, in
    File "C:\Users\Herm Fischer\Documents\mvsl\projects\Arelle\ArelleProject\src\arelle\CntlrCmdLine.py", line 124, in main
    File "C:\Users\Herm Fischer\Documents\mvsl\projects\Arelle\ArelleProject\src\arelle\CntlrCmdLine.py", line 233, in run
    File "C:\Users\Herm Fischer\Documents\mvsl\projects\Arelle\ArelleProject\src\arelle\ViewCsvRelationshipSet.py", line 12, in viewRelationshipSet
    TypeError: __init__() takes exactly 5 arguments (4 given)
    [info] loaded in 4,19 secs at 2011-08-17T13:21:56
    sw:

    • admin says:

      Thank you for testing this; the --csvPre command line option is fixed.

      • 8maki says:

        Have you already fixed this issue?
        I have the same error on the latest code.


        python3 -m arelle.CntlrCmdLine -f ../files/tdnet-qcedjpfr-24320-2011-06-30-01-2011-07-29.xbrl --csvPre ../pre.csv
        [info] loaded in 15.84 secs at 2011-08-20T14:21:37
        Traceback (most recent call last):
        File "/usr/local/Cellar/python3/3.1.3/lib/python3.1/runpy.py", line 128, in _run_module_as_main
        "__main__", fname, loader, pkg_name)
        File "/usr/local/Cellar/python3/3.1.3/lib/python3.1/runpy.py", line 34, in _run_code
        exec(code, run_globals)
        File "/Users/yamakiwataru/Workspace/taiga/xbrl/Arelle/arelle/CntlrCmdLine.py", line 248, in
        main()
        File "/Users/yamakiwataru/Workspace/taiga/xbrl/Arelle/arelle/CntlrCmdLine.py", line 115, in main
        CntlrCmdLine().run(options)
        File "/Users/yamakiwataru/Workspace/taiga/xbrl/Arelle/arelle/CntlrCmdLine.py", line 222, in run
        ViewCsvRelationshipSet.viewRelationshipSet(modelXbrl, options.csvPre, "Presentation", "http://www.xbrl.org/2003/arcrole/parent-child")
        File "arelle/ViewCsvRelationshipSet.py", line 12, in viewRelationshipSet
        view = ViewRelationshipSet(modelXbrl, csvfile, header)
        TypeError: __init__() takes exactly 5 positional arguments (4 given)

        • admin says:

          I see you are running from source code; I believe this fix (in GitHub) should have fixed that: "fix lxml conversion issues in CSV Relationships output", 2011-08-17 15:55:07

          modelXbrl.modelManager.showStatus(_("viewing relationships {0}").format(os.path.basename(arcrole)))
          - view = ViewRelationshipSet(modelXbrl, csvfile, header)
          + view = ViewRelationshipSet(modelXbrl, csvfile, header, lang)
          view.view(arcrole, linkrole, linkqname, arcqname)

          Please send an e-mail to support@arelle.org if I can help in more detail to debug this issue! Thanks.

      • Harold Kinds says:

        Thanks, works ok now.
        Don’t forget to sync the documentation on this page with the -h contents!

  2. Don says:

    Missing --csvFacts=CSVFACTLIST in this doc.

  3. Ignacio Santos says:

    I can get the dimensions and the primary items in a csv file. Is it possible to get the components of the formulas, parameters, filters, inputs, and so on, in a csv file?

    Regards,

    Ignacio Santos

    • admin says:

      It would be quick to add the csv equivalent of the formula GUI treeview pane, would that be helpful for your needs?

      (And then both can be improved with more details about the formula LB resources and arc parameters, as needed.)

    • admin says:

      --csvFormulae parameter added (with corresponding csv output equivalent to the GUI formula tree view)

      • Ignacio Santos says:

        Example of formulas in batch (Windows 7):

        c:

        cd\

        cd \Program Files\Arelle

        echo on

        arelleCmdLine.exe -f "C:\Taxonimy\FINREP_2008\es-be-FINREP_Informes\IS1_6610.xbrl" --htmlFormulae="C:\Taxonomy\XBRLFormula6610.CSV"

  4. FundXJim says:

    In the GUI, you can see a fact table with concepts and date dimensions and values. You can copy & paste this table to excel. Is it possible to output this same view to a csv file via the command line?

    Very useful app by the way

    Cheers

    Jim

    • admin says:

      Yes, I added descriptions of --csvFacts and --csvFactCols to the command line documentation page (they were in the source code and --help text). There's an example in the windows build, file exportCsvFromXbrlInstance.bat, including the dimensions, which I've cut and pasted below (with some file paths from my laptop, you'd need to edit it for the Mac):

      rem Export CSV from XBRL Instance with Dimensions
      rem Please edit or adapt to location of instance documents, output files, and Arelle installation
      @set XBRLINSTANCEROOT=C:\Users\Herm Fischer\Documents\mvsl\projects\EuroFiling\CSV converter\taxonomy2\taxonomy\eu
      @set INSTANCEFILE=%XBRLINSTANCEROOT%\instance.xbrl
      @set OUTPUTLOGFILE=%XBRLINSTANCEROOT%\conversion-log.txt
      @set OUTPUTCSVFILE=%XBRLINSTANCEROOT%\converted-instance.csv
      @set ARELLE=c:\Program Files\Arelle\arelleCmdLine.exe
      "%ARELLE%" --file "%INSTANCEFILE%" --csvFactCols "Label unitRef Dec Value EntityScheme EntityIdentifier Period Dimensions" --csvFacts "%OUTPUTCSVFILE%" 1> "%OUTPUTLOGFILE%" 2>&1

  5. Naveen says:

    I am unable to get the --csvTestReport option to work. I tried running the runEFMTests.bat test batch provided against an instance document. It only put out the logFile.

    • admin says:

      The messages log result that you got is what is intended from a single instance document. The test report, on the other hand, is a result from running an XBRL testcases file or testcases index, which usually represents a suite of tests, indicating test case variations, what to load, expected results to pass, etc. If running a whole suite (such as the conformance suite for base spec, XDT, or other), then the test report shows each test, the expected and achieved result, whether the test passes or not, etc. But on a single instance, just the log of messages applies.

  6. Mikey says:

    In the GUI it is possible to switch the label language and also change label format to name and to Standard Label. Is there a way to get all of those formats as separate columns when calling through command-line?

    • admin says:

      Yep, easy addition; will also add the same to the new web interface (it's common code). Wondering which views you are asking for first (e.g., fact list, tree views, etc.)?

  7. Mikey says:

    I am calling the webserver with …facts?media=xhtml&factListCols=Label,contextRef,unitRef,Dec,Value,EntityScheme,EntityIdentifier,Period,Dimensions. Right now to get the different language labels, I would have to change the system language and call it again.

  8. Carey says:

    Hi, thanks so much for this. I am looking to implement a little validation tool, and I have two questions really:
    1. I was wondering if it was possible to validate multiple xbrl instances via the command line?
    2. Currently it takes between 20 to 30 seconds to load each instance and then a few seconds to validate it. Would it be possible to cache the xsd, linkbases, etc and use this to validate xbrl instances via the command line?

    Any help would be brilliant, thanks
    Carey

    • admin says:

      Yes, but in stages of difficulty. It largely depends on your type of instances.

      1) If they are like US-SEC or IFRS, with each instance having independent extensions and linkbases, there is much less to be gained and significant work to preserve in memory what might be shared.

      2) If the instances all have precisely the identical linkbases and taxonomies, then it is quite easy and a lot to gain in speed.

      3) Several others have found that by profiling the validations, we've been able to get the speed improved significantly; if the instances can be tested on our end for this, please contact support@arelle.org by private message.

      4) There might be a lot more flexibility in sharing using the web service API, because the command line invocations are separate from each other and don't share any process or memory. (Or a design could consider a list of multiple instances all sharing the same DTS in one command line call.)

  9. XbrlPassion says:

    error message from bat file:

    Usage: arelleCmdLine.exe [options]

    arelleCmdLine.exe: error: no such option: --csvFactCols

    Any idea?

  10. XbrlPassion says:

    Some more info. I have tried both the 32 and 64 bit versions of the Windows Arelle Command Line. Neither of them takes --csvFactList as an option. Any idea when this will be supported?

    Regards,

    Charles

    • admin says:

      In doing the web services API, the file output was enhanced from just csv to also html, text, xml, and json (based on the output file name extension), so the parameter was renamed from csvFactList to factListCols. The --help option should have had the latest supported by the code, but anyway the web page and bat file needed editing. Thanks for digging into this. The bat file examples and web page have been updated.

  11. Great tool! Works with the DK-GAAP.

    I’ve tested some of the arguments. But it doesn’t work when using:
    arelleCmdLine.exe -f "P:\dcca20120101\20120101\entryDanishGAAPBalanceSheetAccountFormIncomeStatementByFunctionIncludingManagementsReviewStatisticsAndTax20120101.xsd" -i "c:\temp\npn.xbrl" -v --facts "c:\temp\npn.csv" --factListCols "Label Name contextRef unitRef Dec Prec Lang Value EntityScheme EntityIdentifier Period Dimensions"

    The Name column is blank! (I assume that "Name" is the name of the element from the taxonomy.)

  12. Hi, I’m having problems running the following command line

    C:\Program Files\Arelle>arelleCmdLine.exe -f "C:\Users\mauricio.ahumada\Desktop\XBRL 2012-03\Estados_financieros_(XBRL)60806000_201203.xbrl" -v

    I just got several errors similar to the following:

    [] HTTP Error 404: Not Found
    retrieving http://xbrl.ifrs.org/taxonomy/2011-03-25/generic-link.xsd -
    [FileNotLoadable] File can not be loaded: http://xbrl.ifrs.org/taxonomy/2011-03-25/generic-link.xsd - http://xbrl.ifrs.org/taxonomy/2011-03-25/full_ifrs/ifrs_7_2011-03-25/gre_ifrs_7_2011-03-25.xml 12
    [] HTTP Error 404: Not Found
    retrieving http://xbrl.ifrs.org/taxonomy/2011-03-25/generic-reference.xsd -
    [FileNotLoadable] File can not be loaded: http://xbrl.ifrs.org/taxonomy/2011-03-25/generic-reference.xsd - http://xbrl.ifrs.org/taxonomy/2011-03-25/full_ifrs/ifrs_7_2011-03-25/gre_ifrs_7_2011-03-25.xml 13

    When using GUI mode, loading and validating XBRL works fine; the problem arises from within the command line mode.

    Greetings!

    • admin says:

      It looks like these files are not on the ifrs website at this moment. If you have them, they could be copied into the cache (on Windows, using tools->internet->manage cache to locate the right directory), or handled by the mappings.xml file in the config directory. (For further help please contact support@arelle.org.)

  13. Mike says:

    Hey, I seem to get a syntax error when trying to run the command line. Any ideas, or would it be something I’m doing wrong?


    mike@mike-VirtualBox:~/arelle/Arelle$ python arelleCmdLine.py
    Traceback (most recent call last):
    File "arelleCmdLine.py", line 10, in
    from arelle import CntlrCmdLine, CntlrComServer
    File "/home/mike/arelle/Arelle/arelle/CntlrCmdLine.py", line 14, in
    from arelle import (Cntlr, FileSource, ModelDocument, XmlUtil, Version,
    File "/home/mike/arelle/Arelle/arelle/Cntlr.py", line 409
    print(logEntry, file=sys.stderr)
    ^
    SyntaxError: invalid syntax

    Thanks.

  14. Nils Wilhelm says:

    Hi there,

    I've just tested the --csvPre option and it works just fine. It prints out the standard label. What I would need is the concept name and the namespace to get the exact concept. Would that be possible?

    Thanks for your help in advance

    • admin says:

      For --csvPre, there's an option to specify custom label roles, and one of those options provides the prefixed name instead of the label (but not the full namespace). Please try --labelRole XBRL-concept-name. Otherwise we could code a feature to have customizable columns (like the fact list --factListCols) and include a full namespace and name as well.

  15. Hi. First of all, great work, your tool is excellent.

    I'm having some trouble getting a validation report when calling Arelle through the command line. I don't seem able to find a way to write a report showing the errors the XBRL instance has.

    I execute this

    C:\Program Files\Arelle>arelleCmdLine.exe -f c:\DBNeT\xbrl\out\XBRLMALOS\993010002_201303_I.xbrl -v

    The program finishes and exits with no message, output or anything. I know for certain the instance has some errors; in fact the graphical version of Arelle correctly detects such inconsistencies.

    Any help?

    Thanks in advance.

  16. Jetnor Arifaj says:

    Hi, thanks for the very good overview of the command line. I am wondering if there is a way to save a view from individual sheets to HTML using the command line. The reason I ask is that some of the instances are too big to load in the GUI version. Using the GUI version of Arelle, the save-as dialog gives you the option to change the file type to HTML table, which is quite useful for viewing. Can this be done using the command line? Can you please share an example?
    Many thanks, Jetnor.

    • admin says:

      This depends on what you mean by "sheets". The linkbases can be saved individually. The Table linkbase can be saved to html files (there is an EBA plugin which saves each to a separate file in a batch operation). Please clarify; suggest using support@arelle.org for further details.

  17. Marinos says:

    Sorry for the basic nature of the question, but I am not experienced with coding and I really need to use Arelle…

    I give this simple command in cmd and Arelle parses the data I want and stores it in example.html

    python arelleCmdLine.py -f http://www.sec.gov/Archives/edgar/data/1009672/000119312514163161/crr-20140331.xml -v --factTable example.html

    The only problem is that I don't know where Arelle saves example.html. Sometimes it saves the file in C:\a, which is the Arelle directory on my PC, and other times it saves it in the Documents library.

    How can I specify a certain directory where the html files will be stored by modifying this command in cmd?

    Arelle is superb!!!!!
    Thank you for sharing such an amazing tool!

  18. Jason says:

    I am trying to use the command line to automate pulling new files from the Edgar RSS feeds. I can get the zip file from the feed, but I am having trouble figuring out the right arguments to process a zip file through the command line.

    What would a sample command line operation look like using zip files? Or is that even possible?

  19. Marinos says:

    Dear admin

    I am a finance enthusiast and I would like to ask if there is any way to use Arelle to import chosen values, for example: import Cash and cash equivalents and its numeric value into a designated cell. I am using --factTable to pull all the facts, but scanning so many different document types of companies with Excel proves very hard. Is there any way to immediately import values with Arelle into designated cells in Excel?

    Thank you

  20. Marinos says:

    Dear Herm Fischer

    Can I use the amazing function Find [Text (ignore case)] when I am running Arelle from source? And save that data inside a csv or html or xls file?

    I couldn’t find it on the documentation.

    Best regards
    Marinos

  21. Fin_Edw says:

    Dear admin

    We are college students in Stockholm. How is it possible to extract one specific piece of data, e.g. Revenues, when the corresponding tag in XBRL can vary: it can be us-gaap:Revenue, us-gaap:SalesRevenueNet, etc.? Is there a way to know the number of names that can be used in XBRL for Revenue, Net Income, etc. from somewhere (before the query) in order to address issues of data transparency?

  22. David Jarvis says:

    Hello there!

    This is quite an impressive endeavor – I’ve been digging into it a little bit and you’ve done a ton of hard work here.

    I'm trying to convert an instance document to json, which I believe I should be able to do with a command like the following:

    python3 arelleCmdLine.py -f ../xbrl/data/0001193125-14-058842-xbrl/aol-20131231.xml -v --store-to-XBRL-DB "jsonFile,port,user,password,/Users/davidjarvis/demo,timeout,json"

    However, I get an extremely lengthy stacktrace back:
    [] [Exception] Failed to complete request:
    Type gYear is not supported for json output
    [' File "/Users/davidjarvis/Development/Python/Arelle/arelle/CntlrCmdLine.py", line 830, in run\n pluginXbrlMethod(self, options, modelXbrl)\n', ' File "/Applications/Arelle.app/Contents/MacOS/plugin/xbrlDB/__init__.py", line 205, in xbrlDBCommandLineXbrlRun\n storeIntoDB(dbConnection, modelXbrl)\n', ' File "/Applications/Arelle.app/Contents/MacOS/plugin/xbrlDB/__init__.py", line 166, in storeIntoDB\n result = insertIntoDB(modelXbrl, host=host, port=port, user=user, password=password, database=db, timeout=timeout, product=product, rssItem=rssItem, **kwargs)\n', ' File "/Applications/Arelle.app/Contents/MacOS/plugin/xbrlDB/XbrlSemanticJsonDB.py", line 60, in insertIntoDB\n jsondb.insertXbrl(rssItem=rssItem)\n', ' File "/Applications/Arelle.app/Contents/MacOS/plugin/xbrlDB/XbrlSemanticJsonDB.py", line 276, in insertXbrl\n self.commit(g)\n', ' File "/Applications/Arelle.app/Contents/MacOS/plugin/xbrlDB/XbrlSemanticJsonDB.py", line 230, in commit\n self.execute("Saving RDF Graph", graph=graph)\n', ' File "/Applications/Arelle.app/Contents/MacOS/plugin/xbrlDB/XbrlSemanticJsonDB.py", line 183, in execute\n default=jsonDefaultEncoder)) # might not be unicode in 2.7\n', ' File "/usr/local/Cellar/python3/3.3.4/Frameworks/Python.framework/Versions/3.3/lib/python3.3/json/__init__.py", line 240, in dumps\n **kw).encode(obj)\n', ' File "/usr/local/Cellar/python3/3.3.4/Frameworks/Python.framework/Versions/3.3/lib/python3.3/json/encoder.py", line 193, in encode\n chunks = list(chunks)\n', ' File "/usr/local/Cellar/python3/3.3.4/Frameworks/Python.framework/Versions/3.3/lib/python3.3/json/encoder.py", line 414, in _iterencode\n for chunk in _iterencode_dict(o, _current_indent_level):\n', ' File "/usr/local/Cellar/python3/3.3.4/Frameworks/Python.framework/Versions/3.3/lib/python3.3/json/encoder.py", line 388, in _iterencode_dict\n for chunk in chunks:\n', ' File "/usr/local/Cellar/python3/3.3.4/Frameworks/Python.framework/Versions/3.3/lib/python3.3/json/encoder.py", line 388, in _iterencode_dict\n for chunk in chunks:\n', ' File "/usr/local/Cellar/python3/3.3.4/Frameworks/Python.framework/Versions/3.3/lib/python3.3/json/encoder.py", line 388, in _iterencode_dict\n for chunk in chunks:\n', ' File "/usr/local/Cellar/python3/3.3.4/Frameworks/Python.framework/Versions/3.3/lib/python3.3/json/encoder.py", line 388, in _iterencode_dict\n for chunk in chunks:\n', ' File "/usr/local/Cellar/python3/3.3.4/Frameworks/Python.framework/Versions/3.3/lib/python3.3/json/encoder.py", line 388, in _iterencode_dict\n for chunk in chunks:\n', ' File "/usr/local/Cellar/python3/3.3.4/Frameworks/Python.framework/Versions/3.3/lib/python3.3/json/encoder.py", line 388, in _iterencode_dict\n for chunk in chunks:\n', ' File "/usr/local/Cellar/python3/3.3.4/Frameworks/Python.framework/Versions/3.3/lib/python3.3/json/encoder.py", line 388, in _iterencode_dict\n for chunk in chunks:\n', ' File "/usr/local/Cellar/python3/3.3.4/Frameworks/Python.framework/Versions/3.3/lib/python3.3/json/encoder.py", line 422, in _iterencode\n o = _default(o)\n', ' File "/Applications/Arelle.app/Contents/MacOS/plugin/xbrlDB/XbrlSemanticJsonDB.py", line 133, in jsonDefaultEncoder\n raise TypeError("Type {} is not supported for json output".format(type(obj).__name__))\n'] -

    I'm not sure what it means by "type gYear is not supported for json output" - any insights or potential fixes?

    Thanks a million :)

    – David
