I’m working on a prototype of a flexible data model for storing time series data in a way that is easy to catalogue, query, and filter. Using Pygen both to populate and use the model seems convenient. In its current iteration, I’ve only applied direct relations and (undocumented?) @reverseDirectRelations in the GraphQL schema. I expected to be able to do something similar to client.windmill(windfarm="Hornsea 1").blades(limit=-1).sensor_positions(limit=-1).query(), as found in the Pygen documentation, but it does not work (my client.windmill analogue has no methods corresponding to its relations). Do I have to use edges instead of direct relations to query easily and declaratively with Pygen?
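For context, this is roughly the edge-based schema I am considering instead of direct relations, modelled loosely on the windmill example from the Pygen docs. The type names and the exact directive syntax are my approximation, not a verified working model:

```graphql
# Sketch (names and directive syntax are my assumptions): modelling the
# Windmill -> Blade -> SensorPosition relations as edge-backed relations
# rather than direct relations.
type Windmill {
  name: String
  windfarm: String
  blades: [Blade]
    @relation(type: { space: "my_space", externalId: "Windmill.blades" }, direction: OUTWARDS)
}

type Blade {
  isDamaged: Boolean
  sensorPositions: [SensorPosition]
    @relation(type: { space: "my_space", externalId: "Blade.sensorPositions" }, direction: OUTWARDS)
}

type SensorPosition {
  position: Float
}
```

If edges are what Pygen needs to generate the chained `.blades(...).sensor_positions(...)` traversal methods, this would be the shape of the change.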
I am looking at 127 time series linked to one asset, and I want to download the list of these time series as shown in the screenshot below, but this doesn’t appear to be straightforward. The download button circled in blue saves a JSON file linked only to the asset “11. QHP”. Is there a way to spare the user the effort of manually selecting and downloading each of the 127 time series and later reassembling them into one table like the one shown in the browser?

It is not possible to select and display more than 20 columns in the browser, due to performance issues. This is not critical at this time, but still unsatisfying. I want to download everything wholesale and pick what I need from the list locally. Is there a way around this restriction?

Solving issue #1 would also remedy #2, as I’d be able to join the tables locally again. Thanks
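For reference, this is the kind of local reassembly I have in mind once the files are downloaded. I am assuming each download is a JSON list of `{"timestamp": ..., "value": ...}` datapoints per time series; the actual CDF export format may differ:

```python
import json
from collections import defaultdict

# Assumed (not verified) export shape: one JSON list of datapoints per
# time series. Inline strings stand in for the downloaded files.
files = {
    "ts_temperature": '[{"timestamp": 1, "value": 20.5}, {"timestamp": 2, "value": 20.7}]',
    "ts_pressure":    '[{"timestamp": 1, "value": 1.01}, {"timestamp": 2, "value": 1.02}]',
}

# Pivot into one table: timestamp -> {series name -> value}
table = defaultdict(dict)
for name, raw in files.items():
    for dp in json.loads(raw):
        table[dp["timestamp"]][name] = dp["value"]

for ts in sorted(table):
    print(ts, table[ts])
```

Joining locally like this is exactly why a single bulk download of all 127 series would solve both issues at once.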
No questions yet! Just excited to start the CDF journey. :) Bert Greeby
As an operator, I want to optimize the number of tasks shown in the available space, so that it is easier to navigate through them. This is a suggestion from Celanese users: maximize the number of tasks shown on screen. In this screenshot we can only see 3 tasks. It would improve the user experience if, for example, each task showed only the task name and 2 small buttons right beside it, occupying a single line. It would make the tasks easier to read, especially in checklists that have numerous tasks to be filled in.
As an operator, I want to filter the tasks on a specific checklist, so that it is easier to find specific tasks in large checklists. This is a suggestion from Celanese users: being able to search for specific tasks. This will be especially useful for large checklists, where the operator has to go through the unit taking readings. They do not always take these readings along a fixed path; they walk around and fill in the tasks as they go. Finding the tasks is somewhat difficult in those situations: they have to keep scrolling and looking for the reading they are taking at the moment.
On the Overview screen, having the option to select multiple records to assign or delete would improve the user experience by reducing the time needed when multiple checklists have to be changed.
There are cases during checklist task execution where technicians and operators need to select more than one option from the Check Items buttons created. For example, we could have multiple reasons for a task to be ‘Not Ok’. But currently, users can only select one button from the Check Items created. Check the example below: all the options are ‘Not Ok’, and in this example it could be one or all 4 options. We could consider using a ‘Message’ field, but it’s not ideal because users can insert anything in the Reply field.
I am busy with Cognite Training: Data Engineer Basics > Python SDK Transformations (Hands On). I have completed ‘1. Environment setup’ without any problems, and at the end of that I have a variable defined. However, the next step is to create the database in CDF, and this gives what looks like an access error: CogniteAPIError: Unauthorized | code: 401. Could someone please advise me what should be done to correct this? Thank you. Doug
What access capabilities do I need to run transformations as “Current User”? I have a user who doesn’t see “Run as current user”, as in the screenshot. The next screenshot is mine, and I can see it, probably because I am added as an admin.
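For context, this is the kind of group capability configuration I have been experimenting with. My guess is that running as the current user involves session creation on top of the transformation capabilities, but whether `sessionsAcl` is actually what gates the “Run as current user” option is exactly what I am asking:

```json
{
  "capabilities": [
    { "transformationsAcl": { "actions": ["READ", "WRITE"], "scope": { "all": {} } } },
    { "sessionsAcl":        { "actions": ["CREATE"],        "scope": { "all": {} } } }
  ]
}
```

If someone can confirm or correct the exact ACLs involved, that would be very helpful.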
CDF - the filter option is not working as expected under Common filters on the Data Explorer screen.

Steps to reproduce:
1. Log in to CDF.
2. Click the Data Explorer tab in the CDF menu bar.
3. Click the Files tab on the right side of the panel.
4. Set Data set to 'src:006:documentum:b60:ds under Common filters on the left side of the screen.
5. Select the ‘Before’ checkbox under Common filters on the left side of the panel.
6. Click the calendar icon and set the date to e.g. '10-01-2023'.

Expected result: the document ‘Amarjeet_Test_DT.docx’ should not be displayed in the results window, because it was created after the set date.
Actual result: the document Amarjeet_Test_DT.docx is displayed in CDF.

Note: the issue exists for all date filters (Created time, Updated time, with Before, After, During) in CDF. The user wants to know which date is used when filtering documents with these filters.
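For reference, this is how I understand the comparison the filter should be making, assuming it compares against a created-time value stored as milliseconds since epoch in UTC (my assumption; the date format ambiguity of '10-01-2023' — October 1st vs January 10th — may itself be part of the problem):

```python
from datetime import datetime, timezone

# Assumed: the filter cutoff is interpreted as 2023-10-01 00:00 UTC.
cutoff = datetime(2023, 10, 1, tzinfo=timezone.utc)
cutoff_ms = int(cutoff.timestamp() * 1000)

# Example created-time of a document created in November 2023 (ms since epoch).
created_time_ms = 1_700_000_000_000

# A 'Before' filter should keep only documents with created time < cutoff.
keep = created_time_ms < cutoff_ms
print(keep)  # False -> the document should be excluded from the results
```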
This discussion is linked to the course Cognite Data Fusion for Domain Experts.

A data-driven approach to maintenance is more efficient than relying on a fixed schedule or fixing something once it’s already broken. There are plenty of opportunities around us to take advantage of smart maintenance. Look around in your home or your office, or think of your work tasks. Where could data-driven maintenance make a difference?

Share your thoughts! For example, explain what data you have available or would need in order to do smart maintenance, and how this would improve on current methods.
Hi Everyone! I’m excited to be here to learn more about Cognite’s solutions for industry.
Hello, when I try to run the DB Extractor, I get the following error:

“polars\_cpu_check.py:232: RuntimeWarning: Missing required CPU features.
The following required CPU features were not detected: avx, avx2, fma
Continuing to use this version of Polars on this processor will likely result in a crash.
Install the `polars-lts-cpu` package instead of `polars` to run Polars with better compatibility.
Hint: If you are on an Apple ARM machine (e.g. M1) this is likely due to running Python under Rosetta.
It is recommended to install a native version of Python that does not run under Rosetta x86-64 emulation.
If you believe this warning to be a false positive, you can set the `POLARS_SKIP_CPU_CHECK` environment variable to bypass this check.”

After doing some googling, I was able to install the referenced polars-lts-cpu package using Python, but I got the same error. I’m not sure how to make the extractor reference the polars-lts-cpu package when it runs. See attached screenshot. The extractor i
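For reference, the workaround I have been trying, based on the hint in the error message, is to set the environment variable before launching the extractor in the same shell session. The extractor command below is a placeholder, and whether `1` is the value the check expects is my assumption:

```shell
# Bypass the Polars CPU feature check for processes started from this shell.
# (Value "1" is an assumption; the warning only says to set the variable.)
export POLARS_SKIP_CPU_CHECK=1

# Placeholder for the actual extractor invocation:
# ./cognite-db-extractor config.yml

echo "$POLARS_SKIP_CPU_CHECK"
```

This only suppresses the check; if the bundled Polars genuinely lacks those CPU features it may still crash, so a native (non-Rosetta) Python or an extractor build using polars-lts-cpu is probably the real fix.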
I am facing this error in the data science course on “creating Cognite Functions” with the Cognite SDK. In previous courses I had fixed this error by replacing the “datapoints” keyword with “time_series”. However, I would like to know whether I am perhaps not using the right packages, or whether the commands are deprecated and have new function names in newer versions. Kindly let me know - I am trying to finish these courses before my local bootcamp next week. Thanks, Lavanya
Hi team, is there any possibility that I can attend the Bootcamp virtually, or that it can be held in India, as I am from India? Also, I want to know whether the Bootcamp is for individuals or groups, and the cost of attending. Thanks, Navyasri Indupalli
Hey everyone! My name is Sachin. I am a data professional from the data & programming world. I’m excited to be part of this community, and I look forward to finding and giving help with everything related to data & analytics. I’m here because I am excited to learn more about Cognite’s data journey, products & services. I’ve been a data architect & data engineer in the financial technology industry for close to 12 years now, and I look forward to exploring a new industry to enhance my knowledge.
This discussion is linked to the course Industrial Canvas.

Cognite’s Industrial Canvas provides a digital whiteboard where contextualized data from various sources can be gathered in a single space. Data such as P&IDs, time series, assets, and events can be imported from a CDF project into the canvas. Having all relevant data in a single digital workspace removes the dependency on manual workflows and opens the door to efficient collaboration between engineers!

Think about your everyday workflow: what tasks do you think could be done more efficiently using Industrial Canvas? Do you often need to gather troubleshooting data into one canvas for easier analysis? Maybe you often need to view time series together with a remote colleague? Share your thoughts with us and get inspired by people from different domains explaining how they would use Industrial Canvas in their everyday work!

Omar | Cognite Academy
Currently, the data model and data model instance actions are limited to read and write capabilities. To enhance clarity and delineate responsibilities within CDF groups more effectively, I propose dividing the existing configuration into distinct categories: read, write/update, and delete.

Another enhancement would be to disallow the deletion of containers that have associated views. The same principle applies to views: they should not be deletable if other components (views or data models) reference them.
After we released the first version of time series data quality monitoring, we’ve seen unprecedented demand for ensuring that time series data is continuously reliable.

It’s important for end users of apps and dashboards to know when they can -- and, importantly, when they cannot -- rely on data to make operational decisions. That means that you, as a data scientist or application developer, need to communicate the data quality status to the end user.

You can now easily report live data quality status in dashboards and applications using our data quality monitoring service. The monitoring service creates live data quality metrics, available as time series in Cognite Data Fusion, that you can display in your application -- ensuring that end users know the quality of the data they are using.

Try it out! Learn more in our guide to reporting data quality status in apps and dashboards. To enable this in your project, the service account used by the data quality monitoring service needs
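As a rough illustration of the kind of status an application might surface (this is a local sketch, not the actual metrics produced by the monitoring service), a simple freshness check on a time series could look like:

```python
# Illustrative only: a minimal "freshness" indicator computed client-side.
# The real data quality monitoring service publishes its own metric
# time series in CDF; names and thresholds here are assumptions.
def freshness_status(last_timestamp_ms: int, now_ms: int, max_age_ms: int = 300_000) -> str:
    """Return 'good' if the latest datapoint is within max_age_ms, else 'stale'."""
    return "good" if now_ms - last_timestamp_ms <= max_age_ms else "stale"

now = 1_000_000_000
print(freshness_status(now - 60_000, now))   # datapoint 1 minute old
print(freshness_status(now - 600_000, now))  # datapoint 10 minutes old
```

A dashboard could map such a status to a simple traffic-light indicator so end users see at a glance whether the data is safe to act on.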
Hey team, I was wondering if there is going to be an alignment on what we call this tool across the many places it is referenced? I’ve found these 5 so far:

- This hub group: CDF Toolkit
- cdf-tk and project templates
- GitHub: cdf-project-templates
- PyPI: cognite-toolkit
- Command line: cdf-tk

I’m personally a fan of cdf-tk. cognite-toolkit is too broad (it doesn’t include InRobot, Maintain, f25e). cdf-project-templates is too many words for a CLI. project-templates is too broad.
Currently, in the PI extractor configuration file, you can specify the end timestamp for backfilling a time series using the “to” parameter in the backfill section. What we would expect is for backfilling to stop relatively close to this set date. However, the docs say that it can overshoot by the number of datapoints specified in the data-point chunk size, which results in over-ingestion of datapoints. We would like a way to control the backfilling more precisely: we currently cannot limit the overshoot of the backfilling without also affecting the front-filling, given that both depend on the same data-point chunk size parameter (cdf-chunking > data-points).
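To illustrate the scale of the problem, a back-of-the-envelope calculation with assumed numbers (chunk size and sample rate are hypothetical, not from our actual configuration):

```python
# Worst-case backfill overshoot past the configured 'to' timestamp,
# assuming the overshoot is bounded by one data-point chunk.
chunk_size = 10_000     # hypothetical cdf-chunking > data-points value
sample_period_s = 1     # hypothetical: one datapoint per second

overshoot_s = chunk_size * sample_period_s
overshoot_h = overshoot_s / 3600
print(overshoot_h)      # hours of extra data ingested before 'to'
```

With a 10,000-point chunk and 1 Hz data, that is nearly three hours of unwanted datapoints per time series, and shrinking the chunk size to reduce it would also slow down front-filling.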
As beneficial as inspection robots are, they may suffer from limitations if they are not trained in an environment simulating the industrial environment in which they will be deployed. Therefore, Cognite joined forces with Aker Solutions, TESS, and Createc to build an innovative testing and training facility for inspection robots, called Robot Garden, which will help take robotics' impact on industrial safety and efficiency to new heights. Robot Garden is located at Fornebu, Oslo, and acts as a local test and training facility where robots can train in a realistic environment, similar to the one where they will be deployed, thus enhancing their mission efficiency and helping robot users get the most out of their robot deployments. The main driver behind Robot Garden is to test the robustness of the trained AI models in different challenging deployment scenarios such as bad weather, bad lighting, uneven terrain, etc. In addition, TESS provides a simulation panel that can simulate vari
Hi, there are some bugs when doing contextualization in the Fusion GUI. It should be possible to “select all” when I do a search query in the interactive engineering diagrams contextualization workflow.
We would like to have a “contains” filter option on columns. This would be useful for searching string-type columns to see if they contain a given substring, to filter the data properly.
In multiple cases, validation of a checklist is not needed, and those checklists will remain pending, depending on the Team Captain’s availability. It can also cost the Team Captain time to approve all the pending checklists.

The suggestion is to add an option on the Template creation screen so the user can define whether approval is required for checklists generated from that template, along with the possibility of defining the group of users who will be able to approve them.

As a complement, new checklist statuses could also be created: one to indicate that a checklist was finished but needs approval, and one to signal that it has been approved.