SignalR in a Blazor Server app – Hub.OnConnectedAsync is not hit on a new client connection

I’m following this tutorial to add a SignalR feature to a Blazor Server app. Here is the full code: _hubConnection = new HubConnectionBuilder() .WithUrl(url) .Build(); I assumed that when the above code executes, it would hit the OnConnectedAsync method in the Hub class. At least that is what happens in the sample code given here. But surprisingly that is not happening with my … Read more

Two identical API controller methods but with different content types

The business requirement is to accept a request that can carry either of two different content types in its request header. So I tried the [Consumes()] attribute, but Swagger throws an error. [ProducesResponseType(StatusCodes.Status200OK)] [ProducesResponseType(StatusCodes.Status400BadRequest)] [ProducesResponseType(StatusCodes.Status500InternalServerError)] [ProducesDefaultResponseType] [Route(nameof(GetXmlInvoice))] //[Consumes("application/xml")] [HttpGet] public async Task<IActionResult> GetXmlInvoice([FromQuery] long apiKeyId, [FromQuery] int inId) { //…some code } Approach #1: To … Read more

Why does https://console.cron-job.org raise a ‘504 Gateway Timeout’?

When I do a test call to my website, I get a 504 error in https://console.cron-job.org – why? I even set up CORS to avoid any issues. It seems the endpoint is triggered, so why is no response seen by https://console.cron-job.org? import { NextApiRequest, NextApiResponse } from 'next' import { prepareAndSendInvoices } from 'services/sendInvoices' import … Read more

How to create a DLT streaming live table using Python in Databricks

I have created a streaming live table using SQL as below: CREATE STREAMING LIVE TABLE customers_count_streaming COMMENT "count of customers" TBLPROPERTIES ("myCompanyPipeline.quality" = "gold") AS SELECT count(*) as customers_count FROM STREAM(LIVE.customers_cleaned) How can I create the exact same table, but using Python @dlt, in Databricks?
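A possible Python translation of the SQL definition above, offered as a hedged sketch: it assumes the Databricks Delta Live Tables runtime (which supplies the `dlt` module) and that `customers_cleaned` is another table defined in the same pipeline. It is not runnable outside a DLT pipeline.

```python
import dlt
from pyspark.sql import functions as F

@dlt.table(
    name="customers_count_streaming",
    comment="count of customers",
    table_properties={"myCompanyPipeline.quality": "gold"},
)
def customers_count_streaming():
    # STREAM(LIVE.customers_cleaned) in SQL corresponds to dlt.read_stream()
    return (
        dlt.read_stream("customers_cleaned")
        .agg(F.count("*").alias("customers_count"))
    )
```

The decorator arguments mirror the SQL clauses one-to-one: `comment` maps to COMMENT and `table_properties` to TBLPROPERTIES.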

Error from adehabitatLT::as.ltraj() because dates are not class POSIXct but str(dates) indicates they are POSIXct

The adehabitatLT::as.ltraj() function calculates animal trajectories. The function requires dates to be of class POSIXct. I followed the same steps as in the example section of the help document, using a toy dataset I found online, and converted the date-times to POSIXct, but I still get the following error when running the function: Error in adehabitatLT::as.ltraj(xy = … Read more

Not able to run a supervisorctl command as a different user in Docker

I have a Dockerfile with the following content: FROM 2.dkr.ecr.us-east-1.amazonaws.com/pa-amazonlinux:latest # FROM amazonlinux:2 # install core packages RUN yum update -y && \ amazon-linux-extras install -y python3.8 postgresql14 nginx1 epel && \ yum install -y gcc git openldap-devel openssl-devel crontabs python38-devel mariadb-devel supervisor which tar python2-requests # install extra packages RUN yum install -y libpq-devel libaio ENV … Read more

Define column names when reading a Spark dataset in Kedro

With Kedro, how can I define the column names when reading a spark.SparkDataSet? Below is my catalog.yaml: user-playlists: type: spark.SparkDataSet file_format: csv filepath: data/01_raw/lastfm-dataset-1K/userid-timestamp-artid-artname-traid-traname.tsv load_args: sep: "\t" header: False # schema: # filepath: conf/base/playlists-schema.json save_args: index: False I have been trying to use the following schema, but it doesn’t seem to be accepted (schema: Please provide a … Read more
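One possible approach, offered as a hedged sketch: `SparkDataSet` forwards `load_args` to Spark’s reader, and Spark accepts a DDL-style schema string, so the column names could be declared directly in the catalog entry. The column names and types below are assumptions inferred from the Last.fm file name, not confirmed by the question.

```yaml
user-playlists:
  type: spark.SparkDataSet
  file_format: csv
  filepath: data/01_raw/lastfm-dataset-1K/userid-timestamp-artid-artname-traid-traname.tsv
  load_args:
    sep: "\t"
    header: False
    # DDL-style schema string; names and types are hypothetical,
    # inferred from the fields listed in the file name
    schema: "userid STRING, timestamp TIMESTAMP, artid STRING, artname STRING, traid STRING, traname STRING"
```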

How to calculate standard error and CI to plot in R

I am first calculating the percentage of respondents across different demographics who graduated from high school, based on their program status. This code gets me those percentages: d_perc <- d %>% group_by(group, levels, program_cat, highschool) %>% summarize(n = n()) %>% mutate(percent = n/sum(n)*100) %>% select(-n) Next, I want to additionally calculate the error term around these … Read more
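For an error term around a percentage, the usual choice is the standard error of a proportion with a normal-approximation confidence interval. A minimal Python sketch of the arithmetic (the question’s R/dplyr pipeline would compute the same quantities per group; the function name and the 95% z-value of 1.96 are my assumptions):

```python
import math

def proportion_ci(count, total, z=1.96):
    """Percent, standard error, and CI bounds for a proportion
    (normal approximation), all on the percentage scale."""
    p = count / total
    se = math.sqrt(p * (1 - p) / total)  # SE of a proportion
    return p * 100, se * 100, (p - z * se) * 100, (p + z * se) * 100

# e.g. 40 of 100 respondents in one group graduated from high school:
pct, se_pct, lower, upper = proportion_ci(40, 100)
```

The same formula can be expressed inside the dplyr `mutate()` step using `sqrt(percent/100 * (1 - percent/100) / sum(n))`.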