Currently, the Power BI data connector contains a function, TimeseriesAggregate(), that returns time series aggregations. It takes a source, start and end dates, and a granularity, and returns 13 columns of data. My report only uses 3 of those columns: timestamp, timeseriesId, and Average. With 3.4 million rows, this is a huge amount of wasted bandwidth and storage, and it also slightly degrades report performance. One possible solution would be to add a parameter to TimeseriesAggregate() indicating which columns should be returned. Not transmitting the unused data would save network bandwidth, reduce the size of the report, shorten data load times, and improve the end-user experience.
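A rough back-of-the-envelope sketch of the overhead being described (the ~8 bytes per value is an assumption for illustration, not a measured figure from the connector):

```python
# Rough estimate of data transferred but never used when 10 of the
# 13 returned columns are discarded by the report.
# BYTES_PER_VALUE is an assumption, not a measurement.
ROWS = 3_400_000
UNUSED_COLUMNS = 13 - 3
BYTES_PER_VALUE = 8

wasted_bytes = ROWS * UNUSED_COLUMNS * BYTES_PER_VALUE
print(f"~{wasted_bytes / 1e6:.0f} MB transferred but never used")
```

Even under these conservative assumptions, that is on the order of a few hundred megabytes per refresh, which supports the case for a column-selection parameter.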
In order to request aggregates from large views in Data Modeling into Power BI, it would be beneficial to have the /aggregate endpoint exposed in OData (and, as a result, in Power BI). Otherwise, all the data has to be downloaded into Power BI first and aggregated locally, which lowers performance and is not feasible for views with millions of instances.
Hello, I am trying to perform a calculation on Charts but it does not work. It is a multiplication of 4 time series, all of them resampled to a granularity of 1m. One of them has real values and 3 of them are 0/1 steps. When I zoom out, I can see the calculation and it shows a peak of values, which is not accurate since it shows averaged data. But when I zoom in to see the data, the trend disappears and the error says "One of the time series has less than two values". As I resampled the data to 1 minute, I was expecting to see one value per minute, but that does not seem to happen. Is there another way to calculate the data correctly? Zoom in: (screenshot)
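Outside of Charts, the intended calculation can be sketched with pandas as a sanity check (the timestamps and values below are made up for illustration; the idea is to align all four series on a common 1-minute grid before multiplying):

```python
import pandas as pd

# Hypothetical stand-ins for the four time series: one with real
# values, three with 0/1 step signals. Timestamps are irregular
# on purpose, mimicking raw datapoints.
idx = pd.to_datetime(
    ["2024-01-01 00:00:10", "2024-01-01 00:01:30", "2024-01-01 00:02:45"]
)
real = pd.Series([10.0, 12.0, 14.0], index=idx)
step1 = pd.Series([1, 1, 0], index=idx)
step2 = pd.Series([1, 0, 1], index=idx)
step3 = pd.Series([1, 1, 1], index=idx)

# Resample everything to a shared 1-minute grid first, then multiply.
# ffill carries the last known value forward so every minute has a value.
grid = (
    pd.concat({"real": real, "s1": step1, "s2": step2, "s3": step3}, axis=1)
    .resample("1min")
    .mean()
    .ffill()
)
product = grid["real"] * grid["s1"] * grid["s2"] * grid["s3"]
print(product)
```

If the multiplication is done before the series share a common index, minutes where only some series have datapoints end up empty, which matches the "less than two values" symptom described above.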
Create an assignment list of assignees with their discipline area when assigning a checklist to a user.
Hi, I am trying to access the data science training. It requires two credits, but when I hit purchase the page freezes. Any help with this matter?
Hello, I am trying to follow along with the first chapter in Data Engineer Basics - Integrate, starting with the first workbook: Authentication. When I try to install poetry, I get the error below. Can anyone let me know how I may troubleshoot? I am on Mac, Python 3.12, and pipx (instead of pip).

at ~/Library/Application Support/pypoetry/venv/lib/python3.12/site-packages/poetry/installation/chef.py:164 in _prepare
  160│
  161│     error = ChefBuildError("\n\n".join(message_parts))
  162│
  163│     if error is not None:
→ 164│         raise error from None
  165│
  166│     return path
  167│
  168│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:

Note: This error originates from the build backend, and is likely not a problem with poetry but with pyzmq (25.0.2) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "pyzmq (==25.0.2)"'.
Hi, I am unable to connect the CDF data source to Grafana. Please advise as soon as possible. Thanks.
A similar question was posted in another community; please refer to that.
Is it possible to have a set of pre-built templates for various disciplines or generic use cases, which a new user can utilize directly for preliminary work until they are trained enough to start building their own canvas? Some users like to use Industrial Canvas like a dashboard, and for them a pre-built template gives an idea of how to use the canvas.
Hi team, there should be an API provision to migrate pre-existing charts and canvases developed in a Dev project to Prod. Currently, all of them need to be recreated from scratch.
This is a request from customers who would like additional colours in the palette for Industrial Canvas.
CDF - The filter option is not working as expected under Common filters on the Data Explorer screen.

Steps to reproduce:
1. Log in to CDF.
2. Click the Data Explorer tab in the CDF menu bar.
3. Click the Files tab on the right side of the panel.
4. Set Data set to 'src:006:documentum:b60:ds' under Common filters on the left side of the screen.
5. Select the 'Before' checkbox under Common filters on the left side of the panel.
6. Click the calendar icon and set the date to e.g. '10-01-2023'.

Expected result: Document 'Amarjeet_Test_DT.docx' should not be displayed in the results window, because it was created after the set date.
Actual result: Document 'Amarjeet_Test_DT.docx' is displayed in CDF.

Note: The issue exists for all date filters in CDF (Created time and Updated time, with Before, After, and During). The user wants to know which date is used when filtering documents with these filters.
Is it on the product roadmap to integrate one canvas with another? That way you could have several canvases at various levels of granularity (system view, asset view, platform view), all connected, making it easy and intuitive to work through a use case.
I'm trying to create a new function in Python. When I run the function locally it works fine, but when I run it from the CDF service it does not work. I get this error message: CogniteAPIError: <!doctype html><html lang=en><title>500 Internal Server Error</title><h1>Internal Server Error</h1><p>The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.</p> | code: 500 | X-Request-ID: 04b27193-d865-98d5-8a2a-11b94126e8eb
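A 500 like this often hides an unhandled exception or a missing dependency in the deployed environment. One way to narrow it down is to make sure the handler can be exercised locally with plain Python objects before deploying. A minimal sketch of a Cognite Functions handler (the "values" and "multiply_by" input keys here are hypothetical, chosen only for illustration):

```python
def handle(client, data, secrets=None, function_call_info=None):
    # Entry point that Cognite Functions calls. `client` is an
    # authenticated CogniteClient and `data` is the dict passed
    # at call time.
    # Validate inputs early so failures produce a clear message
    # instead of an opaque server-side error.
    if "values" not in data:
        raise ValueError("expected a 'values' key in the input data")

    factor = data.get("multiply_by", 1)  # hypothetical parameter
    result = [v * factor for v in data["values"]]

    # The return value must be JSON-serializable.
    return {"result": result}
```

Calling handle(None, {"values": [1, 2], "multiply_by": 3}) locally before deploying helps separate bugs in the code from problems with the deployment itself (e.g. missing packages in requirements).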
Features are missing for granting or excluding access to certain spaces based on a regex pattern. Permissions for data models and data model instances should be divided into read, write, create, edit, delete, and update actions by scope.
Imagine the Reporting Site and Reporting Unit entities, where 1 reporting site has N reporting units. Currently, if I am in the Reporting Site entity, there is no way to filter reporting sites based on a reporting unit condition using GraphQL, e.g. "give me the reporting sites that have reporting units with the name 'CLKVAM'". To solve that, we need to go to the reporting unit level and query using a reverse edge, which makes data modeling and querying more complex; sometimes we even need to do transformations to remove duplicated data.
Hello, my name is Georges Assaf, and I am joining the Cognite Hub as a data scientist in the oil & gas industry. Looking forward to interacting with the community and sharing knowledge via this hub.
Welcome to the CDF Fundamentals discussion! This discussion is dedicated to helping learners of the Cognite Data Fusion Fundamentals learning path succeed. If you're struggling with the exercises in this learning path, try the tips & tricks below or post a comment with the challenge you're facing. You can also post your own tips and respond to fellow learners' questions. Cognite Academy's instructors are also here to help.
I am doing the CDF Fundamentals hands-on, module Transform Data, and I am receiving an error. Can someone help? Regards, Amelia
I am creating a transformation where I join data from two different views/containers, where table B has a node reference to table A. I have tried to find documentation for this, but I have not found any so far. Through friends and trial and error I have found two options, and neither seems to perform well. Example:

type A { name: String }
type B { name: String, A_ref: A }

The possible solutions I have found are:

1. Go through cdf_data_models and join on the externalId of the node reference:
   from cdf_data_models(<spc>, <mod>, <ver>, "A") as A
   join cdf_data_models(<spc>, <mod>, <ver>, "B") as B
     on A.external_id = B.A_ref.externalId
2. Same as above, but join on A as a node reference:
   ... on B.A_ref = node_reference(<spc>, A.externalId)

Neither of these options seems to be documented anywhere, and I can't find any other documented approaches. Read performance with these joins seems slow, even though I have set up a few indexes which should cover the different joins. This is also slower than
Below we have outlined several frequently asked questions about Cognite Learn and their corresponding answers. Don't see the answer to your question? Post a reply in this thread and we'll be sure to answer and/or add it to the FAQ list below! Pro tip: use [cmd+F] or [ctrl+F] to search for keywords related to your question.

FAQs

How can I sign up for Cognite Academy?
Follow the steps in this guide. If you have already signed up for Cognite Hub, you can sign in to Cognite Learn with the same email and password.

Where can I see my course progress?
All your certificates and course progress are available under My Overview in the top navigation.

Where can I find my certificate?
All your certificates and course progress are available under My Overview in the top navigation.

How can I watch a video if I get an error saying 'This video cannot be played. (Error code: 232011)'?
This error is related to your browser. Make sure that your browser is up to date. You can find the list of supported browsers
Which file types are supported (read directly) for P&ID and 3D models in CDF?
The ability to pin latest values to a drawing in Canvas is fantastic, but on most of our assets we have more than one time series linked to the equipment. Is it possible to add the ability to pin several time series to the same equipment, and also to pin additional time series that are not contextualized to any asset on top of a document/file/drawing? In addition, consumers of the content need the context of the value, so we need the ability to show units like in the old infographics.
What is the maximum file size that can be uploaded to CDF? Which types of P&ID files can be read directly in CDF? Which types of 3D model files can be read directly in CDF? Is any converter tool available in CDF to convert files into the required formats for P&ID and 3D models in Cognite Data Fusion?