diff --git a/antora.yml b/antora.yml index 6a1fc581..f4a6c756 100644 --- a/antora.yml +++ b/antora.yml @@ -12,3 +12,8 @@ asciidoc: pulsar-version: '2.10' admin-console-version: '2.0.4' heartbeat-version: '1.0.12' + starlight-kafka: 'Starlight for Kafka' + starlight-rabbitmq: 'Starlight for RabbitMQ' + pulsar-reg: 'Apache Pulsar(TM)' + pulsar: 'Apache Pulsar' + pulsar-short: 'Pulsar' \ No newline at end of file diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc index d9bf6e05..3abe4b68 100644 --- a/modules/ROOT/nav.adoc +++ b/modules/ROOT/nav.adoc @@ -13,8 +13,7 @@ * xref:components:heartbeat-vm.adoc[] * xref:components:pulsar-beam.adoc[] * xref:components:pulsar-sql.adoc[] -* xref:components:starlight-for-kafka.adoc[] -* xref:components:starlight-for-rabbitmq.adoc[] +* xref:components:starlight.adoc[] .Operations * xref:operations:auth.adoc[] diff --git a/modules/ROOT/pages/faqs.adoc b/modules/ROOT/pages/faqs.adoc index d91afb83..d3641d4a 100644 --- a/modules/ROOT/pages/faqs.adoc +++ b/modules/ROOT/pages/faqs.adoc @@ -1,123 +1,110 @@ = Luna Streaming FAQs :navtitle: FAQs -If you are new to DataStax Luna Streaming and its Apache Pulsar enhancements, these FAQs are for you. +If you are new to {company} Luna Streaming and its {pulsar} enhancements, these FAQs are for you. -== Introduction +== What is {company} Luna Streaming? -=== What is DataStax Luna Streaming? +{company} Luna Streaming is a new Kubernetes-based distribution of {pulsar}, based on the technology that https://kesque.com/[Kesque] built to run its {pulsar-short}-as-a-service. -DataStax Luna Streaming is a new Kubernetes-based distribution of Apache Pulsar, based on the technology that https://kesque.com/[Kesque] built to run its Pulsar-as-a-service. +== What components and features are provided by {company} Luna Streaming? -=== What components and features are provided by DataStax Luna Streaming? - -In addition to Apache Pulsar itself, DataStax Luna Streaming provides: +In addition to {pulsar} itself, {company} Luna Streaming provides: * An installer that can stand up a dev or production cluster on bare metal or VMs without a pre-existing Kubernetes environment -* A helm chart that can deploy and manage Pulsar on your current Kubernetes infrastructure +* A Helm chart that can deploy and manage {pulsar-short} on your current Kubernetes infrastructure * Cassandra, Elastic, Kinesis, Kafka, and JDBC connectors * A management dashboard * A monitoring and alerting system -=== On which version of Apache Pulsar is DataStax Luna Streaming based? +== On which version of {pulsar} is {company} Luna Streaming based? -DataStax Luna Streaming {luna-version} is based on its distribution of Apache Pulsar {pulsar-version}, plus features and additional enhancements from DataStax contributors. +{company} Luna Streaming {luna-version} is based on its distribution of {pulsar} {pulsar-version}, plus features and additional enhancements from {company} contributors. -=== What does DataStax Luna Streaming provide that I cannot get with open-source Apache Pulsar? +== What does {company} Luna Streaming provide that I cannot get with open-source {pulsar}? -DataStax Luna Streaming is a hardened version of Apache Pulsar that been run through additional testing to ensure it is ready for production use. It also includes additional tooling to help monitor your system, including an enhanced Admin Console and a Heartbeat service to monitor the system health. 
+{company} Luna Streaming is a hardened version of {pulsar} that has been run through additional testing to ensure it is ready for production use. It also includes additional tooling to help monitor your system, including an enhanced Admin Console and a Heartbeat service to monitor the system health.
-=== Is DataStax Luna Streaming an open-source project?
+== Is {company} Luna Streaming an open-source project?
-Yes, DataStax Luna Streaming is open source. See the <>.
+Yes, {company} Luna Streaming is open source. See the <<gitHubRepos>>.
-=== Which Kubernetes platforms are supported by DataStax Luna Streaming?
+== Which Kubernetes platforms are supported by {company} Luna Streaming?
They include Minikube, K8d, Kind, Google Kubernetes Engine (GKE), Microsoft Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), and other commonly used platforms.
[#gitHubRepos]
-=== Where are the DataStax Luna Streaming public GitHub repos?
+== Where are the {company} Luna Streaming public GitHub repos?
There are several public repos, each with a different purpose. See:
* https://github.com/datastax/pulsar[https://github.com/datastax/pulsar] : This is the distro repo (a fork of apache/pulsar).
-* https://github.com/datastax/pulsar-admin-console[https://github.com/datastax/pulsar-admin-console] : This is the repo for the Pulsar admin console, which allows for the configuration and monitoring of Pulsar.
-* https://github.com/datastax/pulsar-heartbeat[https://github.com/datastax/pulsar-heartbeat] : This is a monitoring/observability tool for Pulsar that tracks the health of the cluster and can generate alerts in Slack and OpsGenie.
-* https://github.com/datastax/pulsar-helm-chart[https://github.com/datastax/pulsar-helm-chart] : This is the Helm chart for deploying the DataStax Pulsar Distro in an existing Kubernetes cluster.
-* https://github.com/datastax/pulsar-sink[https://github.com/datastax/pulsar-sink] : This is the DataStax Apache Pulsar Connector (`pulsar-sink` for Cassandra) repo.
-* https://github.com/datastax/burnell[https://github.com/datastax/burnell] : This is a utility for Pulsar that provides various functions, such as key initialization for authentication, and JWT token creation API.
-
-== Installation
+* https://github.com/datastax/pulsar-admin-console[https://github.com/datastax/pulsar-admin-console] : This is the repo for the {pulsar-short} admin console, which allows for the configuration and monitoring of {pulsar-short}.
+* https://github.com/datastax/pulsar-heartbeat[https://github.com/datastax/pulsar-heartbeat] : This is a monitoring/observability tool for {pulsar-short} that tracks the health of the cluster and can generate alerts in Slack and OpsGenie.
+* https://github.com/datastax/pulsar-helm-chart[https://github.com/datastax/pulsar-helm-chart] : This is the Helm chart for deploying the {company} {pulsar-short} Distro in an existing Kubernetes cluster.
+* https://github.com/datastax/pulsar-sink[https://github.com/datastax/pulsar-sink] : This is the {company} {pulsar} Connector (`pulsar-sink` for Cassandra) repo.
+* https://github.com/datastax/burnell[https://github.com/datastax/burnell] : This is a utility for {pulsar-short} that provides various functions, such as key initialization for authentication and a JWT token creation API.
-=== Is there a prerequisite version of Java needed for the DataStax Luna Streaming installation?
+== Is there a prerequisite version of Java needed for the {company} Luna Streaming installation?
-The DataStax Luna Streaming distribution is designed for Java 11. However, because the product releases Docker images, you do not need to install Java (8 or 11) in advance. Java 11 is bundled in the Docker image. +The {company} Luna Streaming distribution is designed for Java 11. However, because the product releases Docker images, you do not need to install Java (8 or 11) in advance. Java 11 is bundled in the Docker image. -=== What are the install options for DataStax Luna Streaming? +== What are the install options for {company} Luna Streaming? -* Use the Helm chart provided at https://github.com/apache/pulsar-helm-chart[https://github.com/datastax/pulsar-helm-chart] to install DataStax Luna Streaming in an existing Kubernetes cluster on your laptop or hosted by a cloud provider. -* Use the tarball provided at https://github.com/datastax/pulsar/releases[https://github.com/datastax/pulsar/releases] to install DataStax Luna Streaming on a server or VM. -* Use the DataStax Ansible scripts provided at https://github.com/datastax/pulsar-ansible[https://github.com/datastax/pulsar-ansible] to install DataStax Luna Streaming on a server or VM with our provided playbooks. +* Use the Helm chart provided at https://github.com/apache/pulsar-helm-chart[https://github.com/datastax/pulsar-helm-chart] to install {company} Luna Streaming in an existing Kubernetes cluster on your laptop or hosted by a cloud provider. +* Use the tarball provided at https://github.com/datastax/pulsar/releases[https://github.com/datastax/pulsar/releases] to install {company} Luna Streaming on a server or VM. +* Use the {company} Ansible scripts provided at https://github.com/datastax/pulsar-ansible[https://github.com/datastax/pulsar-ansible] to install {company} Luna Streaming on a server or VM with our provided playbooks. -=== How do I install DataStax Luna Streaming in my Kubernetes cluster? +== How do I install {company} Luna Streaming in my Kubernetes cluster? Follow the full instructions in xref:install-upgrade:quickstart-helm-installs.adoc[Quick Start for Helm Chart installs]. -=== How do I install DataStax Luna Streaming on my server or VM? +== How do I install {company} Luna Streaming on my server or VM? Follow the full instructions in xref:install-upgrade:quickstart-server-installs.adoc[Quick Start for Server/VM installs]. -== What task can I perform in the DataStax Luna Streaming Admin Console? +== What task can I perform in the {company} Luna Streaming Admin Console? From the Admin Console, you can: -* Add and run Pulsar clients +* Add and run {pulsar-short} clients * Establish credentials for secure connections * Define topics that can be published for streaming apps -* Set up Pulsar sinks that publish topics and make them available to subscribers, such as for a Cassandra database table -* Control namespaces used by Pulsar +* Set up {pulsar-short} sinks that publish topics and make them available to subscribers, such as for a Cassandra database table +* Control namespaces used by {pulsar-short} * Use the Admin API -== What is Pulsar Heartbeat? - -https://github.com/datastax/pulsar-heartbeat[Pulsar Heartbeat] monitors the availability, tracks the performance, and reports failures of the Pulsar cluster. It produces synthetic workloads to measure end-to-end message pubsub latency. Pulsar Heartbeat is a cloud-native application that can be installed by Helm within the Pulsar Kubernetes cluster. - -== What is Prometheus? +== What is {pulsar-short} Heartbeat? 
-https://prometheus.io/docs/introduction/overview/[Prometheus] is an open-source tool to collect metrics on a running app, providing real-time monitoring and alerts. +https://github.com/datastax/pulsar-heartbeat[{pulsar-short} Heartbeat] monitors the availability, tracks the performance, and reports failures of the {pulsar-short} cluster. It produces synthetic workloads to measure end-to-end message pubsub latency. {pulsar-short} Heartbeat is a cloud-native application that can be installed by Helm within the {pulsar-short} Kubernetes cluster. -== What is Grafana? +== What are the features provided by {company} {pulsar} Connector (`pulsar-sink`) that are not supported in `kafka-sink`? -https://grafana.com/[Grafana] is a visualization tool that helps you make sense of metrics and related data coming from your apps via Prometheus, for example. +The https://pulsar.apache.org/docs/en/io-overview/[{pulsar-short} IO framework] provides many features that are not possible in Kafka, and has different compression formats and auth/security features. The features are handled by {pulsar-short}. For more, see xref:operations:io-connectors.adoc[Luna Streaming IO Connectors]. -== Pulsar Connector +The {company} {pulsar} Connector allows single-record acknowledgement and negative acknowledgements. -=== What are the features provided by DataStax Apache Pulsar Connector (`pulsar-sink`) that are not supported in `kafka-sink`? - -The https://pulsar.apache.org/docs/en/io-overview/[Pulsar IO framework] provides many features that are not possible in Kafka, and has different compression formats and auth/security features. The features are handled by Pulsar. For more, see xref:operations:io-connectors.adoc[Luna Streaming IO Connectors]. - -The DataStax Apache Pulsar Connector allows single-record acknowledgement and negative acknowledgements. - -=== What features are missing in DataStax Apache Pulsar Connector (`pulsar-sink`) compared with `kafka-sink`? +== What features are missing in {company} {pulsar} Connector (`pulsar-sink`) compared with `kafka-sink`? * No support for `tinyint` (`int8bit`) and `smallint` (`int16bit`). -* The key is always a String, but you can write JSON inside it; the support is implemented in pulsar-sink, but not in Pulsar IO. +* The key is always a String, but you can write JSON inside it; the support is implemented in pulsar-sink, but not in {pulsar-short} IO. * The “value” of a “message property” is always a String; for example, you cannot map the message property to `__ttl` or to `__timestamp`. * Field names inside structures must be valid for Avro, even in case of JSON structures. For example, field names like `Int.field` (with dot) or `int field` (with space) are not valid. -=== How is DataStax Apache Pulsar Connector distributed? +== How is {company} {pulsar} Connector distributed? There are two packages: -* The `pulsar-sink` functionality of DataStax Apache Pulsar Connector is included with DataStax Luna Streaming. It's built in! -* You can optionally download the DataStax Apache Pulsar Connector tarball from the https://downloads.datastax.com/#pulsar-sink[DataStax Downloads] site, and then use it as its own product with your open-source Apache Pulsar install. - -If you're using open-source software (OSS) Apache Pulsar, you can use DataStax Apache Pulsar Connector with the OSS to take advantage of this `pulsar-sink` for Cassandra. See the xref:pulsar-connector:ROOT:index.adoc[DataStax Apache Pulsar Connector documentation]. 
+* The `pulsar-sink` functionality of {company} {pulsar} Connector is included with {company} Luna Streaming. It's built in! +* You can optionally download the {company} {pulsar} Connector tarball from the https://downloads.datastax.com/#pulsar-sink[{company} Downloads] site, and then use it as its own product with your open-source {pulsar} install. -== APIs +If you're using open-source software (OSS) {pulsar}, you can use {company} {pulsar} Connector with the OSS to take advantage of this `pulsar-sink` for Cassandra. See the xref:pulsar-connector:ROOT:index.adoc[{company} {pulsar} Connector documentation]. -=== What client APIs does DataStax Luna Streaming provide? +== What is the {company} Change Data Capture (CDC) for Cassandra connector? -The same as for Apache Pulsar. See https://pulsar.apache.org/docs/en/client-libraries/. +This source connector streams data changes from Cassandra tables to Pulsar topics. +For more information, see the xref:cdc-for-cassandra:ROOT:index.adoc[{company} CDC for Cassandra connector documentation]. +== What client APIs does {company} Luna Streaming provide? +The same as for {pulsar}. See https://pulsar.apache.org/docs/en/client-libraries/. \ No newline at end of file diff --git a/modules/ROOT/pages/index.adoc b/modules/ROOT/pages/index.adoc index 05ee555f..3aa200a6 100644 --- a/modules/ROOT/pages/index.adoc +++ b/modules/ROOT/pages/index.adoc @@ -1,27 +1,27 @@ -= Welcome to DataStax Luna Streaming += Welcome to {company} Luna Streaming :navtitle: Luna Streaming -DataStax Luna Streaming is a production-ready distribution of Apache Pulsar built to run seamlessly on any CNCF conformant version of Kubernetes. DataStax Luna Streaming provides all of the core capabilities included in the Apache Community version of Apache Pulsar, plus a number of additional tools and features to facilitate administration and operational tasks associated with running Apache Pulsar in production. +{company} Luna Streaming is a production-ready distribution of {pulsar} built to run seamlessly on any CNCF conformant version of Kubernetes. {company} Luna Streaming provides all of the core capabilities included in the Apache Community version of {pulsar}, plus a number of additional tools and features to facilitate administration and operational tasks associated with running {pulsar} in production. == Release notes -The latest release of DataStax Luna Streaming is {luna-version}, which matches the supported, distributed Pulsar version numbers. +The latest release of {company} Luna Streaming is {luna-version}, which matches the supported, distributed {pulsar-short} version numbers. -The prior Luna Streaming release (numbered 1.0.x or 2.7.2) provided the Pulsar 2.7.2 distribution. +The prior Luna Streaming release (numbered 1.0.x or 2.7.2) provided the {pulsar-short} 2.7.2 distribution. -Refer to the DataStax Luna Streaming https://github.com/datastax/release-notes/blob/master/Luna_Streaming_2.8_Release_Notes.md[release notes], which are hosted in our public GitHub repo, for information & linked commit IDs that were implemented in the latest Luna Streaming {luna-version} release. +Refer to the {company} Luna Streaming https://github.com/datastax/release-notes/blob/master/Luna_Streaming_2.8_Release_Notes.md[release notes], which are hosted in our public GitHub repo, for information & linked commit IDs that were implemented in the latest Luna Streaming {luna-version} release. 
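If you want to confirm which Luna Streaming images (and therefore which {pulsar-short} version) a running cluster is using, the image tags on the pods are the quickest check. This is a minimal sketch for a Helm-chart install, assuming the `datastax-pulsar` namespace used elsewhere in these docs; pod and image names depend on your values file.

[source,shell]
----
# List each pod with the image it runs; the image tag carries the Luna Streaming version
kubectl get pods -n datastax-pulsar \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
----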
== Components -In addition to the distribution of https://pulsar.apache.org/en/versions/[Apache Pulsar {pulsar-version}], DataStax Luna Streaming provides: +In addition to the distribution of https://pulsar.apache.org/en/versions/[{pulsar} {pulsar-version}], {company} Luna Streaming provides: -* A xref:install-upgrade:quickstart-helm-installs.adoc[Helm chart] that deploys and manages Pulsar on your current CNCF-conformant Kubernetes infrastructure +* A xref:install-upgrade:quickstart-helm-installs.adoc[Helm chart] that deploys and manages {pulsar-short} on your current CNCF-conformant Kubernetes infrastructure * Cassandra, Elastic, Kinesis, Kafka, and JDBC xref:operations:io-connectors.adoc[connectors] -* xref:components:admin-console-vm.adoc[Pulsar Admin Console] for simplified administration of your Pulsar environment +* xref:components:admin-console-vm.adoc[{pulsar-short} Admin Console] for simplified administration of your {pulsar-short} environment -* xref:components:heartbeat-vm.adoc[Pulsar Heartbeat] to observe and monitor your Pulsar instances +* xref:components:heartbeat-vm.adoc[{pulsar-short} Heartbeat] to observe and monitor your {pulsar-short} instances == Features @@ -57,5 +57,5 @@ In addition to the distribution of https://pulsar.apache.org/en/versions/[Apache * If you have an existing Kubernetes environment, deploy Luna Streaming with a xref:install-upgrade:quickstart-helm-installs.adoc[Helm Installation]. * If you have a bare metal or a cloud environment, see xref:install-upgrade:quickstart-server-installs.adoc[Server/VM Installation]. -* If you want to learn about monitoring with Pulsar Heartbeat, see xref:components:pulsar-monitor.adoc[Pulsar Heartbeat]. +* If you want to learn about monitoring with {pulsar-short} Heartbeat, see xref:components:pulsar-monitor.adoc[{pulsar-short} Heartbeat]. * If you have questions about Luna Streaming, see xref::faqs.adoc[Luna Streaming FAQs]. \ No newline at end of file diff --git a/modules/ROOT/partials/install-helm.adoc b/modules/ROOT/partials/install-helm.adoc index f6ddb383..11b558cf 100644 --- a/modules/ROOT/partials/install-helm.adoc +++ b/modules/ROOT/partials/install-helm.adoc @@ -1,4 +1,4 @@ -. Add the DataStax Helm chart repo to your Helm store: +. Add the {company} Helm chart repo to your Helm store: + [source,shell] ---- @@ -6,7 +6,7 @@ helm repo add datastax-pulsar https://datastax.github.io/pulsar-helm-chart ---- . Install the Helm chart using a minimalist values file. -This command creates a Helm release named "my-pulsar-cluster" using the DataStax Luna Helm chart, within the K8s namespace "datastax-pulsar". +This command creates a Helm release named "my-pulsar-cluster" using the {company} Luna Helm chart, within the K8s namespace "datastax-pulsar". The minimal cluster creates only the essential components and has no ingress or load balanced services. + [source,shell,subs="attributes+"] diff --git a/modules/ROOT/partials/manually-create-credentials.adoc b/modules/ROOT/partials/manually-create-credentials.adoc index 715be2b8..799dbb3e 100644 --- a/modules/ROOT/partials/manually-create-credentials.adoc +++ b/modules/ROOT/partials/manually-create-credentials.adoc @@ -1,7 +1,7 @@ A number of values need to be stored in secrets prior to enabling token-based authentication. -. Generate a key-pair for signing the tokens using the Pulsar tokens command: +. 
Generate a key-pair for signing the tokens using the {pulsar-short} tokens command: + [source,bash] ---- diff --git a/modules/ROOT/partials/port-forward-web-service.adoc b/modules/ROOT/partials/port-forward-web-service.adoc deleted file mode 100644 index d90c57d0..00000000 --- a/modules/ROOT/partials/port-forward-web-service.adoc +++ /dev/null @@ -1,6 +0,0 @@ -In a new terminal, port forward Pulsar's admin service: - -[source,shell] ----- -kubectl port-forward -n datastax-pulsar service/pulsar-broker 8080:8080 ----- \ No newline at end of file diff --git a/modules/components/pages/admin-console-tutorial.adoc b/modules/components/pages/admin-console-tutorial.adoc index 3ec3d15b..5fd75dce 100644 --- a/modules/components/pages/admin-console-tutorial.adoc +++ b/modules/components/pages/admin-console-tutorial.adoc @@ -1,6 +1,6 @@ -= Pulsar Admin Console += {pulsar-short} Admin Console -The *DataStax Admin Console for Apache Pulsar(R)* is a web-based UI from DataStax that administers topics, namespaces, sources, sinks, and various aspects of Apache Pulsar features. +The *{company} Admin Console for {pulsar-reg}* is a web-based UI from {company} that administers topics, namespaces, sources, sinks, and various aspects of {pulsar} features. * xref:components:admin-console-tutorial.adoc#getting-started[] * xref:components:admin-console-tutorial.adoc#features[] @@ -11,11 +11,11 @@ The *DataStax Admin Console for Apache Pulsar(R)* is a web-based UI from DataSta * xref:components:admin-console-tutorial.adoc#video[] [#getting-started] -== Getting Started in Pulsar Admin Console +== Getting Started in {pulsar-short} Admin Console -In the *Luna Streaming Pulsar Admin Console*, you can use Pulsar clients to send and receive pub/sub messages. +In the *Luna Streaming {pulsar-short} Admin Console*, you can use {pulsar-short} clients to send and receive pub/sub messages. -If you installed the Admin console with the xref:install-upgrade:quickstart-helm-installs.adoc[DataStax Helm chart], access the Admin console with the `pulsar-adminconsole` external load balancer endpoint in your cloud provider: +If you installed the Admin console with the xref:install-upgrade:quickstart-helm-installs.adoc[{company} Helm chart], access the Admin console with the `pulsar-adminconsole` external load balancer endpoint in your cloud provider: image::GCP-all-pods.png[GCP Pods] @@ -24,9 +24,9 @@ Log in with username `admin`. If you're running a xref:install-upgrade:quickstart-server-installs.adoc[server or VM] deployment, see xref:admin-console-vm.adoc[Admin Console on Server/VM] for instructions on deploying and accessing the Admin console. [#features] -== Pulsar Admin Console features +== {pulsar-short} Admin Console features -To try out your service, use the built-in WebSocket test clients on the Pulsar Admin Console's *Test Clients* page. +To try out your service, use the built-in WebSocket test clients on the {pulsar-short} Admin Console's *Test Clients* page. To see currently available namespaces, go to *Namespaces*, or select the button in the upper right corner. @@ -39,11 +39,11 @@ image::luna-streaming-admin-console.png[Luna Streaming Admin Console] For interactive code samples, go to *Code Samples*. [#send-receive] -== Sending and receiving Pulsar messages +== Sending and receiving {pulsar-short} messages -Go to the Pulsar Admin Console's **Test Clients** page. The quickest way to try your service is to use the test clients and send messages from one client to the other. 
+Go to the {pulsar-short} Admin Console's **Test Clients** page. The quickest way to try your service is to use the test clients and send messages from one client to the other. -In the WebSocket Test Client 1 section, click **Connect**. This action creates a connection from the Pulsar Admin Console that's running in your browser to the Pulsar instance on your server. +In the WebSocket Test Client 1 section, click **Connect**. This action creates a connection from the {pulsar-short} Admin Console that's running in your browser to the {pulsar-short} instance on your server. Scroll down to the Consume tab. In this simple example, which verifies that the service is running properly, add a `hello world` message and click Send. Example: @@ -51,7 +51,7 @@ image::test-message.png[Send a message using a test client] In doing so, you published a message to your server, and in the Test Client you're listening to your own topic. -Your client is working with the Pulsar server. +Your client is working with the {pulsar-short} server. [#create-topics] == Create new topics and tenants @@ -71,18 +71,18 @@ To see detailed information about your topics, go to *Topics*. [#code-samples] == Code samples -On the Pulsar Admin Console's *Code Samples* page, there are examples for Java, Python, Golang, Node.js, WebSocket, and HTTP clients. +On the {pulsar-short} Admin Console's *Code Samples* page, there are examples for Java, Python, Golang, Node.js, WebSocket, and HTTP clients. Each example shows Producer, Consumer, and Reader code, plus language-specific examples of setting project properties and dependencies. -For example, selecting Java will show you how to connect your Java project to Pulsar by modifying your Maven's `pom.xml` file. +For example, selecting Java will show you how to connect your Java project to {pulsar-short} by modifying your Maven's `pom.xml` file. [#connect-to-pulsar] -== Connecting to Pulsar +== Connecting to {pulsar-short} -This section describes how to connect Pulsar components to the Admin console. +This section describes how to connect {pulsar-short} components to the Admin console. === Creating and showing credentials -When connecting clients, you'll need to provide your connect token to identify your account. In the Pulsar APIs, you specify the token when creating the client object. The token is your password to your account, so keep it safe. +When connecting clients, you'll need to provide your connect token to identify your account. In the {pulsar-short} APIs, you specify the token when creating the client object. The token is your password to your account, so keep it safe. The code samples automatically add your client token as part of the source code for convenience. However, a more secure practice would be to read the token from an environment variable or a file. @@ -92,7 +92,7 @@ If you previously created a token, use the Credentials page to get its value. 
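As a sketch of the more secure pattern mentioned above, you can keep the token out of your source entirely: store the value from the *Credentials* page in a file and reference that file from your tooling. The broker hostname and topic below are placeholders, and the file location is only a suggestion.

[source,shell]
----
# Store the token from the Credentials page in a file only you can read
mkdir -p ~/.pulsar && chmod 700 ~/.pulsar
echo -n "<token-from-credentials-page>" > ~/.pulsar/token.jwt
chmod 600 ~/.pulsar/token.jwt

# Reference the file instead of pasting the token into code
pulsar-client \
  --url "pulsar+ssl://<broker-hostname>:6651" \
  --auth-plugin org.apache.pulsar.client.impl.auth.AuthenticationToken \
  --auth-params "file://$HOME/.pulsar/token.jwt" \
  produce persistent://public/default/test-topic -m "hello"
----

The `pulsar-admin` example later on this page uses the same `file://` form for `--auth-params`.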
=== Connecting Clients -To connect using the Pulsar binary protocol, use the following URL format with port 6651: +To connect using the {pulsar-short} binary protocol, use the following URL format with port 6651: `pulsar+ssl://:6651` @@ -120,7 +120,7 @@ For example: `https://ip-10-101-32-250.srv101.dsinternal.org:8085` -=== Connect to Pulsar admin API +=== Connect to {pulsar-short} admin API To connect to the admin API, use the following URL format with port 8443: @@ -143,17 +143,17 @@ pulsar-admin --admin-url https://ip-10-101-32-250.srv101.dsinternal.org:8443 \ --auth-params file:///token.jwt ---- -You can get the token from the Pulsar Admin Console's *Credentials* page. +You can get the token from the {pulsar-short} Admin Console's *Credentials* page. Alternatively, you can save the URL authentication parameters in your `client.conf` file. [#video] == Admin console video -You can also follow along with this video from our *Five Minutes About Pulsar* series to get started with the admin console. +You can also follow along with this video from our *Five Minutes About {pulsar-short}* series to get started with the admin console. video::1IwblLfPiPQ[youtube, list=PL2g2h-wyI4SqeKH16czlcQ5x4Q_z-X7_m] == Next steps -For more on building and running a standalone Pulsar Admin console, see the xref:admin-console-vm.adoc[Admin Console on Server/VM] or the Pulsar Admin console repo https://github.com/datastax/pulsar-admin-console#dev[readme]. \ No newline at end of file +For more on building and running a standalone {pulsar-short} Admin console, see the xref:admin-console-vm.adoc[Admin Console on Server/VM] or the {pulsar-short} Admin console repo https://github.com/datastax/pulsar-admin-console#dev[readme]. \ No newline at end of file diff --git a/modules/components/pages/admin-console-vm.adoc b/modules/components/pages/admin-console-vm.adoc index 853b6819..a0241a48 100644 --- a/modules/components/pages/admin-console-vm.adoc +++ b/modules/components/pages/admin-console-vm.adoc @@ -1,6 +1,6 @@ -= Install Pulsar Admin Console on Server/VM += Install {pulsar-short} Admin Console on Server/VM -*Pulsar Admin Console* is a web-based UI that administrates topics, namespaces, sources, sinks and various aspects of Apache Pulsar(TM) features. +*{pulsar-short} Admin Console* is a web-based UI that administrates topics, namespaces, sources, sinks and various aspects of {pulsar-reg} features. The Admin Console is a VueJS application that runs in a browser. It also includes a web server that serves up the files for the Admin Console as well as providing configuration and authentication services. @@ -15,7 +15,7 @@ This document covers: * <> [#install] -== Install Pulsar Admin Console +== Install {pulsar-short} Admin Console . Ensure Node version 14.18 or higher is installed. You can find the most recent Node release https://nodejs.org/en/download/[here], or use wget: + @@ -25,7 +25,7 @@ wget https://nodejs.org/dist/v14.18.3/node-v14.18.3-linux-x64.tar.xz / tar -xf node-v14.18.3-linux-x64.tar.xz ---- -. Download and install the Pulsar Admin console tarball to the VM. You can find the most recent Pulsar Admin Console release https://github.com/datastax/pulsar-admin-console/releases[here]. +. Download and install the {pulsar-short} Admin console tarball to the VM. You can find the most recent {pulsar-short} Admin Console release https://github.com/datastax/pulsar-admin-console/releases[here]. .. 
The tarball is also available with `wget`: + @@ -57,11 +57,11 @@ Port 6454 is specified in `pulsar-admin-console/config/default.json`. To change [#configuration] == Configuration -The `default.json` configuration file contains a set of general configs for the Admin Console, plus a server-specific set under `server_config`. The Admin Console server proxies all requests from the Admin Console to the Pulsar broker (or Pulsar proxy). +The `default.json` configuration file contains a set of general configs for the Admin Console, plus a server-specific set under `server_config`. The Admin Console server proxies all requests from the Admin Console to the {pulsar-short} broker (or {pulsar-short} proxy). You can modify the configuration for the Admin Console in `pulsar-admin-console/config/default.json`, or place additional configuration files (for example, `local.json`) in the `/config` subdirectory to override parameters. -You need to configure `pulsar_url` to point to one of your brokers or a proxy/loadbalancer in front of the brokers (can be Pulsar proxy). The Admin Console server must be able to directly reach each broker by the IP/hostname that is returned by the Pulsar CLI command `pulsar-admin brokers list `. +You need to configure `pulsar_url` to point to one of your brokers or a proxy/loadbalancer in front of the brokers (can be {pulsar-short} proxy). The Admin Console server must be able to directly reach each broker by the IP/hostname that is returned by the {pulsar-short} CLI command `pulsar-admin brokers list `. [NOTE] ==== @@ -79,24 +79,24 @@ These values can be modified in the JSON configuration file. |=== |Setting | Default | Description -| api_version | 2.8.3 | Version of the Pulsar client API to recommend under Samples. +| api_version | 2.8.3 | Version of the {pulsar-short} client API to recommend under Samples. | auth_mode | none | Authentication mode. One of `none`, `user`, `k8s`, or `openidconnect`. See <> for details. | ca_certificate | | String of CA certificate to display in the console under Credentials. -| clients_disabled | false | Disable test clients. Test clients depend on WebSocket proxy, so if this is not running in Pulsar cluster you may want to disable them. -| cluster_name | standalone | Name of Pulsar cluster connecting to. The cluster name can be retrieved with the CLI command `pulsar-admin clusters list`. +| clients_disabled | false | Disable test clients. Test clients depend on WebSocket proxy, so if this is not running in {pulsar-short} cluster you may want to disable them. +| cluster_name | standalone | Name of {pulsar-short} cluster connecting to. The cluster name can be retrieved with the CLI command `pulsar-admin clusters list`. | functions_disabled | false | If functions are not enabled in the cluster, disable the function sections (Functions, Sinks, Sources). | grafana_url | | If `render_monitoring_tab` is enabled, URL for Grafana. -| host_overrides.http | \http://localhost:8964 | URL to display in console to connect to Pulsar Beam HTTP proxy. -| host_overrides.pulsar | \http://localhost:6650 | URL to display in console to connect to Pulsar. +| host_overrides.http | \http://localhost:8964 | URL to display in console to connect to {pulsar-short} Beam HTTP proxy. +| host_overrides.pulsar | \http://localhost:6650 | URL to display in console to connect to {pulsar-short}. | host_overrides.ws | //localhost:8080 | URL to display in console to connect to WebSocket proxy. | notice_text | | Custom notice to appear at top of console. 
| oauth_client_id || This is the client ID that the console will use when authenticating with authentication provider. -| polling_interval | 10000 | How often the console polls Pulsar for updated values. In milliseconds. +| polling_interval | 10000 | How often the console polls {pulsar-short} for updated values. In milliseconds. | render_monitoring_tab | false | Enable tab that includes links to Grafana dashboards. -| server_config.admin_token | | When using `user` or `k8s` auth mode, a Pulsar token is used to connect to the Pulsar cluster. This specifies the token as a string. For full access, a superuser token is recommended. The `token_path` setting will override this value if present. +| server_config.admin_token | | When using `user` or `k8s` auth mode, a {pulsar-short} token is used to connect to the {pulsar-short} cluster. This specifies the token as a string. For full access, a superuser token is recommended. The `token_path` setting will override this value if present. | server_config.log_level | info | Log level for the console server. | server_config.port | 6454 | The listen port for the console server. -| server_config.pulsar_url | \http://localhost:8080 | URL for connecting to the Pulsar cluster. Should point to either a broker or Pulsar proxy. The console server must be able to reach this URL. +| server_config.pulsar_url | \http://localhost:8080 | URL for connecting to the {pulsar-short} cluster. Should point to either a broker or {pulsar-short} proxy. The console server must be able to reach this URL. | server_config.ssl.ca_path | | Path to the CA certificate. To enable HTTPS, `ca_path`, `cert_path`, and `key_path` must all be set. | server_config.ssl.cert_path | | Path to the server certificate. To enable HTTPS, `ca_path`, `cert_path`, and `key_path` must all be set. | server_config.ssl.hostname_validation | | Verify hostname matches the TLS certificate. @@ -105,12 +105,12 @@ These values can be modified in the JSON configuration file. | server_config.kubernetes.k8s_namespace | pulsar | When using `k8s` auth_mode, Kubernetes namespace that contains the username/password secrets. | server_config.kubernetes.service_host| | When using `k8s` auth_mode, specify a custom Kubernetes host name. | server_config.kubernetes.service_port | | When using `k8s` auth_mode, specify a custom Kubernetes port. -| server_config.token_path | | When using `user` or `k8s` auth mode, a Pulsar token is used to connect to the Pulsar cluster. This specifies the path to a file that contains the token to use. For full access, a superuser token is recommended. Alternatively, use `admin_token`. +| server_config.token_path | | When using `user` or `k8s` auth mode, a {pulsar-short} token is used to connect to the {pulsar-short} cluster. This specifies the path to a file that contains the token to use. For full access, a superuser token is recommended. Alternatively, use `admin_token`. | server_config.token_secret| | Secret used when signing access token for logging into the console. If not specified, a default secret is used. | server_config.user_auth.username | | When using `user` auth_mode, the login user name. | server_config.user_auth.password | | When using `user` auth_mode, the login password. -| server_config.websocket_url | https://websocket.example.com:8500 | URL for WebSocket proxy. Used by Test Clients to connect to Pulsar. The console server must be able to reach this URL. -| tenant | public | The default Pulsar tenant to view when starting the console. 
+| server_config.websocket_url | https://websocket.example.com:8500 | URL for WebSocket proxy. Used by Test Clients to connect to {pulsar-short}. The console server must be able to reach this URL. +| tenant | public | The default {pulsar-short} tenant to view when starting the console. |=== [#auth-modes] @@ -120,12 +120,12 @@ The `auth_mode` setting has four available configurations. === "auth_mode": "none" -No login screen is presented. Authentication must be disabled in Pulsar because the Admin Console will not attempt to authenticate. +No login screen is presented. Authentication must be disabled in {pulsar-short} because the Admin Console will not attempt to authenticate. === "auth_mode": "user" The Admin Console is protected by a login screen. Credentials are configured using the `username` and `password` settings in the `/config/default.json` file. -Once authenticated with these credentials, the token for connecting to Pulsar is retrieved from the server (configured using `token_path` or `admin_token`) and used to authenticate with the Pulsar cluster. +Once authenticated with these credentials, the token for connecting to {pulsar-short} is retrieved from the server (configured using `token_path` or `admin_token`) and used to authenticate with the {pulsar-short} cluster. === "auth_mode": "k8" @@ -140,20 +140,20 @@ The password must be stored in the secret with a key of `password` and a value o Multiple secrets with the prefix can be configured to set up multiple users for the Admin Console. A password can be reset by patching the corresponding Kubernetes secret. -Once the user is authenticated using one of the Kubernetes secrets, the token for connecting to Pulsar is retrieved from the server (configured using `token_path` or `admin_token`) and used to authenticate with the Pulsar cluster. +Once the user is authenticated using one of the Kubernetes secrets, the token for connecting to {pulsar-short} is retrieved from the server (configured using `token_path` or `admin_token`) and used to authenticate with the {pulsar-short} cluster. === "auth_mode": "openidconnect" In this auth mode, the dashboard will use your login credentials to retrieve a JWT from an authentication provider. -In the *DataStax Pulsar Helm Chart*, this is implemented by integrating the Pulsar Admin Console with Keycloak. Upon successful retrieval of the JWT, the Admin Console will use the retrieved JWT as the bearer token when making calls to Pulsar. +In the *{company} {pulsar-short} Helm Chart*, this is implemented by integrating the {pulsar-short} Admin Console with Keycloak. Upon successful retrieval of the JWT, the Admin Console will use the retrieved JWT as the bearer token when making calls to {pulsar-short}. -In addition to configuring the `auth_mode`, you must also configure the `oauth_client_id` (see <>). This is the client id that the Console will use when authenticating with Keycloak. Note that in Keycloak, it is important that this client exists and that it has the sub claim properly mapped to your desired Pulsar subject. Otherwise, the JWT won't work as desired. +In addition to configuring the `auth_mode`, you must also configure the `oauth_client_id` (see <>). This is the client id that the Console will use when authenticating with Keycloak. Note that in Keycloak, it is important that this client exists and that it has the sub claim properly mapped to your desired {pulsar-short} subject. Otherwise, the JWT won't work as desired. 
==== Connecting to an OpenID Connect Auth/Identity Provider When opening the Admin Console, the first page is the login page. When using the `openidconnect` auth mode, the auth call needs to go to the Provider's server. -In the current design, nginx must be configured to route the call to the provider. The *DataStax Pulsar Helm Chart* does this automatically. +In the current design, nginx must be configured to route the call to the provider. The *{company} {pulsar-short} Helm Chart* does this automatically. == Next steps diff --git a/modules/components/pages/heartbeat-vm.adoc b/modules/components/pages/heartbeat-vm.adoc index addb779d..f135bf9c 100644 --- a/modules/components/pages/heartbeat-vm.adoc +++ b/modules/components/pages/heartbeat-vm.adoc @@ -1,6 +1,6 @@ = Heartbeat on VM/Server -This document describes how to install Pulsar Heartbeat on a virtual machine (VM) or server. For installation with the Docker image, see xref:install-upgrade:quickstart-helm-installs.adoc[Helm Chart Installation]. +This document describes how to install {pulsar-short} Heartbeat on a virtual machine (VM) or server. For installation with the Docker image, see xref:install-upgrade:quickstart-helm-installs.adoc[Helm Chart Installation]. == Install Heartbeat Binary @@ -22,7 +22,7 @@ $ ls ~/Downloads/pulsar-heartbeat-{heartbeat-version}-linux-amd64 == Execute Heartbeat binary -The Pulsar Heartbeat configuration is defined by a `.yaml` file. A yaml template for Heartbeat is available at https://github.com/datastax/pulsar-heartbeat/blob/master/config/runtime-template.yml[]. In this file, the environmental variable `PULSAR_OPS_MONITOR_CFG` tells the application where to source the file. +The {pulsar-short} Heartbeat configuration is defined by a `.yaml` file. A yaml template for Heartbeat is available at https://github.com/datastax/pulsar-heartbeat/blob/master/config/runtime-template.yml[]. In this file, the environmental variable `PULSAR_OPS_MONITOR_CFG` tells the application where to source the file. Run the binary file `pulsar-heartbeat---`. diff --git a/modules/components/pages/pulsar-beam.adoc b/modules/components/pages/pulsar-beam.adoc index 3939f49d..69e65eeb 100644 --- a/modules/components/pages/pulsar-beam.adoc +++ b/modules/components/pages/pulsar-beam.adoc @@ -1,13 +1,13 @@ -= Pulsar Beam with Luna Streaming -:navtitle: Pulsar Beam -:description: Install a minimal Luna Streaming helm chart that includes Pulsar Beam += {pulsar-short} Beam with Luna Streaming +:navtitle: {pulsar-short} Beam +:description: Install a minimal Luna Streaming Helm chart that includes {pulsar-short} Beam :helmValuesPath: https://raw.githubusercontent.com/datastaxdevs/luna-streaming-examples/main/beam/values.yaml -The https://github.com/kafkaesque-io/pulsar-beam[Pulsar Beam] project is an HTTP-based streaming and queueing system for use with Apache Pulsar. +The https://github.com/kafkaesque-io/pulsar-beam[{pulsar-short} Beam] project is an HTTP-based streaming and queueing system for use with {pulsar}. -With Pulsar Beam, you can send messages over HTTP, push messages to a webhook or cloud function, chain webhooks and functions together, or stream messages through server-sent events (SSE). +With {pulsar-short} Beam, you can send messages over HTTP, push messages to a webhook or cloud function, chain webhooks and functions together, or stream messages through server-sent events (SSE). -In this guide, you'll install a minimal DataStax Pulsar Helm chart that includes Pulsar Beam. 
+In this guide, you'll install a minimal {company} {pulsar-short} Helm chart that includes {pulsar-short} Beam. == Prerequisites @@ -28,7 +28,7 @@ In a separate terminal window, port forward the Beam endpoint service: kubectl port-forward -n datastax-pulsar service/pulsar-proxy 8085:8085 ---- -The forwarding service will map the URL:PORT https://127.0.0.1:8085 to Pulsar Proxy running in the new cluster. +The forwarding service will map the URL:PORT https://127.0.0.1:8085 to {pulsar-short} Proxy running in the new cluster. Because Beam was enabled, the Proxy knows to forward on to the Beam service. [source,shell] @@ -73,7 +73,7 @@ id: {9 0 0 0 0xc002287ad0} data: Hi there ---- -You have now completed the basics of using Beam in a Pulsar Cluster. Refer to the project's https://github.com/kafkaesque-io/pulsar-beam/blob/master/README.md[readme] to see all the possibilities! +You have now completed the basics of using Beam in a {pulsar-short} Cluster. Refer to the project's https://github.com/kafkaesque-io/pulsar-beam/blob/master/README.md[readme] to see all the possibilities! == A Python producer and consumer @@ -153,6 +153,6 @@ include::ROOT:partial$cleanup-terminal-and-helm.adoc[] Here are links to resources and guides you might be interested in: -* https://github.com/kafkaesque-io/pulsar-beam[Learn more] about the Pulsar Beam project -* https://kafkaesque-io.github.io/pulsar-beam-swagger[Pulsar Beam API] +* https://github.com/kafkaesque-io/pulsar-beam[Learn more] about the {pulsar-short} Beam project +* https://kafkaesque-io.github.io/pulsar-beam-swagger[{pulsar-short} Beam API] * xref:pulsar-sql.adoc[] \ No newline at end of file diff --git a/modules/components/pages/pulsar-monitor.adoc b/modules/components/pages/pulsar-monitor.adoc index 50942ece..c85113a8 100644 --- a/modules/components/pages/pulsar-monitor.adoc +++ b/modules/components/pages/pulsar-monitor.adoc @@ -1,26 +1,26 @@ -= Pulsar Heartbeat += {pulsar-short} Heartbeat -Pulsar Heartbeat monitors the availability, tracks the performance, and reports failures of the Pulsar cluster. +{pulsar-short} Heartbeat monitors the availability, tracks the performance, and reports failures of the {pulsar-short} cluster. It produces synthetic workloads to measure end-to-end message pubsub latency. -Pulsar Heartbeat is a cloud native application that can be installed by Helm within a Pulsar Kubernetes cluster. It can also monitor multiple Pulsar clusters. +{pulsar-short} Heartbeat is a cloud native application that can be installed by Helm within a {pulsar-short} Kubernetes cluster. It can also monitor multiple {pulsar-short} clusters. -TIP: Pulsar Heartbeat is installed automatically for server/VM installations as described in xref:install-upgrade:quickstart-server-installs.adoc[]. +TIP: {pulsar-short} Heartbeat is installed automatically for server/VM installations as described in xref:install-upgrade:quickstart-server-installs.adoc[]. 
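For a Helm-based cluster, a quick way to confirm that {pulsar-short} Heartbeat is actually running is to find its pod and tail its logs while it runs its probes. This is a hedged sketch: it assumes the chart created a pod whose name contains `heartbeat` in the `datastax-pulsar` namespace, so adjust the names to match your release.

[source,shell]
----
# Find the Heartbeat pod created by the Helm chart
kubectl get pods -n datastax-pulsar | grep -i heartbeat

# Tail its logs to watch the synthetic pub/sub probes
kubectl logs -n datastax-pulsar -f \
  "$(kubectl get pods -n datastax-pulsar -o name | grep -i heartbeat | head -n 1)"
----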
-Pulsar Heartbeat supports the following features: +{pulsar-short} Heartbeat supports the following features: * Monitor message pubsub and admin REST API endpoint * Measure end-to-end message latency from producing to consuming messages -* Measure message latency over the websocket interface, and Pulsar function -* Monitor instance availability of broker, proxy, bookkeeper, and zookeeper in a Pulsar Kubernetes cluster -* Monitor individual Pulsar broker's health +* Measure message latency over the websocket interface, and {pulsar-short} function +* Monitor instance availability of broker, proxy, bookkeeper, and zookeeper in a {pulsar-short} Kubernetes cluster +* Monitor individual {pulsar-short} broker's health * Incident alert integration with OpsGenie * Customer configurable alert thresholds and probe test intervals * Slack alerts == Configuration -Pulsar Heartbeat is a data driven tool that sources configuration from a yaml or json file. The configuration json file can be specified in the following order of precedence: +{pulsar-short} Heartbeat is a data driven tool that sources configuration from a yaml or json file. The configuration json file can be specified in the following order of precedence: * An environment variable `PULSAR_OPS_MONITOR_CFG` * A command line argument `./pulsar-heartbeat -config /path/to/runtime.yml` @@ -30,7 +30,7 @@ You can download a template https://github.com/datastax/pulsar-heartbeat/blob/ma == Observability -Pulsar Heartbeat exposes Prometheus compliant metrics at the `\metrics` endpoint for scraping. The exported metrics are: +{pulsar-short} Heartbeat exposes Prometheus compliant metrics at the `\metrics` endpoint for scraping. The exported metrics are: [cols="<,^,<"] |=== @@ -75,17 +75,17 @@ Pulsar Heartbeat exposes Prometheus compliant metrics at the `\metrics` endpoint == In-cluster monitoring -Pulsar Heartbeat can be deployed within the same Pulsar Kubernetes cluster. +{pulsar-short} Heartbeat can be deployed within the same {pulsar-short} Kubernetes cluster. NOTE: Kubernetes' pod and service, and individual broker monitoring are only supported within the same Kubernetes cluster deployment. == Docker -Pulsar Heartbeat's official docker image can be pulled https://hub.docker.com/repository/docker/datastax/pulsar-heartbeat/tags?page=1&ordering=last_updated[here] +{pulsar-short} Heartbeat's official docker image can be pulled https://hub.docker.com/repository/docker/datastax/pulsar-heartbeat/tags?page=1&ordering=last_updated[here] === Docker compose -`./config/runtime.yml` or `./config/runtime.json` must have a Pulsar jwt and must be configured properly. +`./config/runtime.yml` or `./config/runtime.json` must have a {pulsar-short} jwt and must be configured properly. [source,bash] ---- @@ -96,7 +96,7 @@ $ docker-compose up The runtime.yml/yaml or runtime.json file must be mounted to /config/runtime.yml as the default configuration path. -Run docker container with Pulsar CA certificate if TLS is enabled and expose Prometheus metrics for collection. +Run docker container with {pulsar-short} CA certificate if TLS is enabled and expose Prometheus metrics for collection. 
[source,bash] ---- diff --git a/modules/components/pages/pulsar-sql.adoc b/modules/components/pages/pulsar-sql.adoc index b70225ee..8797c8aa 100644 --- a/modules/components/pages/pulsar-sql.adoc +++ b/modules/components/pages/pulsar-sql.adoc @@ -1,22 +1,22 @@ -= Using Pulsar SQL with Luna Streaming -:navtitle: Pulsar SQL -:description: This guide installs the luna streaming helm chart using minimum values for a working Pulsar cluster that includes SQL workers += Using {pulsar-short} SQL with Luna Streaming +:navtitle: {pulsar-short} SQL +:description: This guide installs the Luna Streaming Helm chart using minimum values for a working {pulsar-short} cluster that includes SQL workers :helmValuesPath: https://raw.githubusercontent.com/datastaxdevs/luna-streaming-examples/main/pulsar-sql/values.yaml -Pulsar SQL allows enterprises to query Apache Pulsar topic data with SQL. +{pulsar-short} SQL allows enterprises to query {pulsar} topic data with SQL. This is a powerful feature for an Enterprise, and SQL is a language they're likely familiar with. Stream processing, real-time analytics, and highly customized dashboards are just a few of the possibilities. -Pulsar offers a pre-made plugin for Trino that is included in its distribution. -Additionally, Pulsar has built-in options to create Trino workers and automatically configure the communications between Pulsar's ledger and Trino. +{pulsar-short} offers a pre-made plugin for Trino that is included in its distribution. +Additionally, {pulsar-short} has built-in options to create Trino workers and automatically configure the communications between {pulsar-short}'s ledger and Trino. -In this guide, we will use the DataStax Pulsar Helm Chart to install a Pulsar cluster with Pulsar SQL. +In this guide, we will use the {company} {pulsar-short} Helm Chart to install a {pulsar-short} cluster with {pulsar-short} SQL. The Trino coordinator and desired number of workers will be created directly in the cluster. == Prerequisites You will need the following prerequisites in place to complete this guide: -* Pulsar CLI +* {pulsar-short} CLI * https://prestodb.io/docs/current/installation/cli.html[Presto CLI] (this example version 0.278.1) * https://helm.sh/docs/intro/install/[Helm 3 CLI] (this example uses version 3.8.0) * https://kubernetes.io/docs/tasks/tools/[Kubectl CLI] (this example uses version 1.23.4) @@ -24,7 +24,7 @@ You will need the following prerequisites in place to complete this guide: [IMPORTANT] ==== -PrestoDB has been replaced by Trino, but Apache Pulsar is using Presto's version. +PrestoDB has been replaced by Trino, but {pulsar} is using Presto's version. The Trino CLI uses the "X-TRINO-USER" header for authentications but Presto expects "X-PRESTO-USER", which is why we use the Presto CLI. ==== @@ -36,7 +36,7 @@ include::ROOT:partial$install-helm.adoc[] You'll need to interact with services in the K8s cluster. Map a few ports to those services. -There's no need to forward Pulsar's messaging service ports. +There's no need to forward {pulsar-short}'s messaging service ports. 
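If you want to double-check the service names before forwarding anything, listing the services in the namespace is a harmless first step. This assumes the `datastax-pulsar` namespace created by the minimalist values file; the `pulsar-sql` and `pulsar-broker` entries are the ones forwarded in the next two commands.

[source,shell]
----
# Confirm the service names exposed by the Helm release before port forwarding
kubectl get svc -n datastax-pulsar
----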
In a new terminal window, port forward the Presto SQL service: @@ -45,7 +45,12 @@ In a new terminal window, port forward the Presto SQL service: kubectl port-forward -n datastax-pulsar service/pulsar-sql 8090:8090 ---- -include::ROOT:partial$port-forward-web-service.adoc[] +In a separate terminal, port forward {pulsar-short}'s admin service: + +[source,shell] +---- +kubectl port-forward -n datastax-pulsar service/pulsar-broker 8080:8080 +---- == Confirm Presto is available @@ -64,11 +69,11 @@ image::presto-sql-dashboard.png[Presto SQL dashboard] == Fill a topic with the data-generator source In this example, we will use the "data-generator" source connector to create a topic and add sample data simultaneously. -The minimalist Helm chart values use the https://github.com/datastax/release-notes/blob/master/Luna_Streaming_2.10_Release_Notes.md#lunastreaming-all-distribution[datastax/lunastreaming-all] image, which includes all supported Pulsar connectors. +The minimalist Helm chart values use the https://github.com/datastax/release-notes/blob/master/Luna_Streaming_2.10_Release_Notes.md#lunastreaming-all-distribution[datastax/lunastreaming-all] image, which includes all supported {pulsar-short} connectors. This example uses the "public" tenant and "default" namespace. -These are created by default in Pulsar, but you can use whatever tenant and namespace you are comfortable with. +These are created by default in {pulsar-short}, but you can use whatever tenant and namespace you are comfortable with. -. Download the minimalist Pulsar client. +. Download the minimalist {pulsar-short} client. This "client.conf" assumes the port forwarding addresses we will perform in the next step. + [source,shell] @@ -84,8 +89,8 @@ wget https://raw.githubusercontent.com/datastaxdevs/luna-streaming-examples/main export PULSAR_CLIENT_CONF= ---- -. Navigate to the Pulsar home folder and run the following command. -The CLI will use the environment variable's value as configuration for interacting with the Pulsar cluster. +. Navigate to the {pulsar-short} home folder and run the following command. +The CLI will use the environment variable's value as configuration for interacting with the {pulsar-short} cluster. + [source,shell] ---- @@ -131,7 +136,7 @@ The user can match the name you used to login earlier in this guide, but doesn't presto> show catalogs; ---- + -Notice the similarities between your Pulsar tenant/namespaces and Presto's output: +Notice the similarities between your {pulsar-short} tenant/namespaces and Presto's output: + .Result [source,shell] @@ -171,7 +176,7 @@ Query 20230103_163355_00001_zvk84, FINISHED, 2 nodes presto> select * from pulsar."public/default".mytopic limit 10; ---- + -The output should be the 10 messages that were added to the Pulsar topic previously. +The output should be the 10 messages that were added to the {pulsar-short} topic previously. + If you prefer, you can query your table with the Presto client REST API. The response will include a `nextUri` value. @@ -193,7 +198,7 @@ select * from pulsar."public/default".mytopic limit 10 presto> exit ---- -You have successfully interacted with a Pulsar Cluster via SQL. +You have successfully interacted with a {pulsar-short} Cluster via SQL. Want to put your new learnings to the test? Try using the Presto plugin in https://redash.io/data-sources/presto[Redash] or https://superset.apache.org/docs/databases/presto/[Superset] to create useful dashboards. @@ -201,11 +206,11 @@ Want to put your new learnings to the test? 
Try using the Presto plugin in https === Why are there quotes around the schema name? You might wonder why there are quotes ("") around the schema name. -This is a result of mapping Presto primitives to Pulsar's primitives. +This is a result of mapping Presto primitives to {pulsar-short}'s primitives. Presto has catalogs, schemas, and tables. -Pulsar has tenants, namespaces, and topics. -The Pulsar Presto plugin assumes the catalog name which leaves schema and table, so the tenant and namespace are combined with a forward slash delimited string. Presto has to see that combination as a single string, which means it needs to be wrapped in quotes. +{pulsar-short} has tenants, namespaces, and topics. +The {pulsar-short} Presto plugin assumes the catalog name which leaves schema and table, so the tenant and namespace are combined with a forward slash delimited string. Presto has to see that combination as a single string, which means it needs to be wrapped in quotes. == Connect with JDBC diff --git a/modules/components/pages/starlight-for-kafka.adoc b/modules/components/pages/starlight-for-kafka.adoc deleted file mode 100644 index ed72e74b..00000000 --- a/modules/components/pages/starlight-for-kafka.adoc +++ /dev/null @@ -1,126 +0,0 @@ -= Using Starlight for Kafka with Luna Streaming -:navtitle: Starlight for Kafka -:description: This guide will take you step-by-step through deploying DataStax Luna Streaming helm chart with the Starlight for Kafka protocol handler extension -:helmValuesPath: https://raw.githubusercontent.com/datastaxdevs/luna-streaming-examples/main/starlight-for-kafka/values.yaml - -Starlight for Kafka brings the native Apache Kafka protocol support to Apache Pulsar by introducing a Kafka protocol handler on Pulsar brokers. -By adding the Starlight for Kafka protocol handler to your Pulsar cluster, you can migrate your existing Kafka applications and services to Pulsar without modifying the code. - -== Prerequisites - -* https://helm.sh/docs/intro/install/[Helm 3 CLI] (we used version 3.8.0) -* https://www.apache.org/dyn/closer.cgi?path=/kafka/3.3.1/kafka_2.13-3.3.1.tgz[Kafka CLI] (we used version 3.3.1) -* https://kubernetes.io/docs/tasks/tools/[Kubectl CLI] (we used version 1.23.4) -* Enough access to a K8s cluster to create a namespace, deployments, and pods - -== Install Luna Streaming helm chart - -include::ROOT:partial$install-helm.adoc[] - -== Forward service port - -You'll need to interact with a few of the services in the K8s cluster. -Map a few ports to those services. - -include::ROOT:partial$port-forward-web-service.adoc[] - -In a separate terminal window, port forward the Starlight for Kafka service: - -[source,shell] ----- -kubectl port-forward -n datastax-pulsar service/pulsar-proxy 9092:9092 ----- - -== Have a look around - -The Luna Streaming Helm Chart automatically creates a tenant named "public" and a namespace within that tenant named "default". - -The Starlight for Kafka extension creates a few namespaces and topics to function correctly. - -List the namespaces in the "public" tenant to see what was created. - -[source,shell] ----- -~/apache-pulsar-2.10.1$ ./bin/pulsar-admin namespaces list public ----- - -The output should be similar to the following. - -[source,shell] ----- -public/__kafka -public/__kafka_producerid -public/default ----- - -Notice the namespaces prefixed with "__kafka". -These are used by the service for different functions. 
-To learn more about Starlight for Kafka operations, see the S4K xref:starlight-for-kafka:ROOT:index.adoc[documentation]. - -== Produce a message with the Kafka CLI - -If you hadn't noticed, we never opened the Pulsar binary port to accept new messages. -Only the admin port and the Kafka port are open. -To further show how native Starlight for Kafka is to Pulsar, we will use the Kafka CLI to produce and consume messages from Pulsar. - -From within the Kafka directory, run the following command to start the shell. - -[source,shell] ----- -~/kafka_2.13-3.3.1$ ./bin/kafka-console-producer.sh --topic quickstart --bootstrap-server localhost:9092 ----- - -Type a message, press Enter to send it, then Ctrl-C to exit the producer shell. - -[source,shell] ----- -This my first message ----- - -Wait a second! We never created a topic! And where did the "quickstart" topic come from?! - -The default behavior of Starlight for Kafka is to create a new single partition, persistent topic when one is not present. -You can configure this behavior and many other S4K parameters in the https://github.com/datastaxdevs/luna-streaming-examples/blob/main/starlight-for-kafka/values.yaml[Helm chart]. -Learn more about the configuration values xref:starlight-for-kafka:configuration:starlight-kafka-configuration.adoc[here]. - -Let's have a look at the topic that was created. From your Pulsar home folder, run the following command. - -[source,shell] ----- -~/apache-pulsar-2.10.1$ ./bin/pulsar-admin topics list public/default ----- - -The output will include the newly created topic. - -[source,shell] ----- -persistent://public/default/quickstart-partition-0 ----- - -== Consume the new message with the Kafka CLI - -Let's use the Kafka CLI to consume the message we just produced. Start the consumer shell from the Kafka home folder with the following command. - -[source,shell] ----- -~/kafka_2.13-3.3.1$ ./bin/kafka-console-consumer.sh --topic quickstart --from-beginning --bootstrap-server localhost:9092 ----- - -The data of our new message will be output. Enter Ctrl-C to exit the shell. - -[source,shell] ----- -This my first message ----- - -== Next steps - -Kafka users and existing applications using Kafka can enjoy the many benefits of a Pulsar cluster, while never having to change tooling or libraries. -Other folks that are more comfortable with Pulsar tooling and clients can also interact with the same topics. Together, new and legacy applications work together to create modern solutions. - -Here are links to other guides and resource you might be interested in. 
- -* xref:streaming-learning:use-cases-architectures:starlight/kafka/index.adoc[Messaging with Starlight for Kafka] -* xref:pulsar-beam.adoc[] -* xref:pulsar-sql.adoc[] -* xref:heartbeat-vm.adoc[] \ No newline at end of file diff --git a/modules/components/pages/starlight-for-rabbitmq.adoc b/modules/components/pages/starlight-for-rabbitmq.adoc deleted file mode 100644 index 950d5f43..00000000 --- a/modules/components/pages/starlight-for-rabbitmq.adoc +++ /dev/null @@ -1,95 +0,0 @@ -= Using Starlight for RabbitMQ with Luna Streaming -:navtitle: Starlight for RabbitMQ -:description: This guide will take you step-by-step through deploying DataStax Luna Streaming helm chart with the Starlight for RabbitMQ protocol handler extension -:helmValuesPath: https://raw.githubusercontent.com/datastaxdevs/luna-streaming-examples/main/starlight-for-rabbitmq/values.yaml - -Starlight for RabbitMQ brings native https://www.rabbitmq.com/[RabbitMQ] protocol support to https://pulsar.apache.org/[Apache Pulsar(TM)] by introducing a RabbitMQ protocol handler on Pulsar brokers or Pulsar proxies. -By adding the Starlight for RabbitMQ protocol handler to your Pulsar cluster, you can migrate your existing RabbitMQ applications and services to Pulsar without modifying the code. - -== Prerequisites - -* https://helm.sh/docs/intro/install/[Helm 3 CLI] (we used version 3.8.0) -* https://kubernetes.io/docs/tasks/tools/[Kubectl CLI] (we used version 1.23.4) -* Python (we used version 3.8.10) -* Enough access to a K8s cluster to create a namespace, deployments, and pods - -== Install Luna Streaming helm chart - -include::ROOT:partial$install-helm.adoc[] - -== Forward service port - -You'll need to interact with a few of the services in the K8s cluster. -Map a few ports to those services. - -include::ROOT:partial$port-forward-web-service.adoc[] - -In a separate terminal window, port forward the Starlight for RabbitMQ service: - -[source,shell] ----- -kubectl port-forward -n datastax-pulsar service/pulsar-proxy 5672:5672 ----- - -== Produce a message with the RabbitMQ Python client - -If you hadn't noticed, we never opened the Pulsar binary port to accept new messages. -Only the admin port and the RabbitMQ port are open. -To further demonstrate how native Starlight for RabbitMQ is, we will use the Pika RabbitMQ Python library to produce and consume messages from Pulsar. - -Save the following Python script to a safe place as `test-queue.py`. -The script assumes you have opened the localhost:5672 port. - -[source,python] ----- -#!/usr/bin/env python -import pika - -connection = pika.BlockingConnection(pika.ConnectionParameters(port=5672)) -channel = connection.channel() - -try: - channel.queue_declare("test-queue") - print("created test-queue queue") - - channel.basic_publish(exchange="", routing_key="test-queue", body="test".encode('utf-8')) - print("published message test") - - _, _, res = channel.basic_get(queue="test-queue", auto_ack=True) - assert res is not None, "should have received a message" - print("received message: " + res.decode()) - - channel.queue_delete("test-queue") - print("deleted test-queue queue") - -finally: - connection.close() ----- - -Open a terminal and return to the safe place where you saved the Python script. -Run the following command to execute the Python program. - -[source,shell] ----- -python ./test-queue.py ----- - -The output should look like the following. 
- -[souce,shell] ----- -created test-queue queue -published message test -received message: test -deleted test-queue queue ----- - -== Next steps - -The Luna Helm chart deployed Starlight for RabbitMQ on the Pulsar proxy and opened the correct port. -Your application will now "talk" to Pulsar as if it were a real RabbitMQ host. - -* xref:streaming-learning:use-cases-architectures:starlight/rabbitmq/index.adoc[Messaging with Starlight for RabbitMQ] -* xref:pulsar-beam.adoc[] -* xref:pulsar-sql.adoc[] -* xref:heartbeat-vm.adoc[] \ No newline at end of file diff --git a/modules/components/pages/starlight.adoc b/modules/components/pages/starlight.adoc new file mode 100644 index 00000000..2d16f8c0 --- /dev/null +++ b/modules/components/pages/starlight.adoc @@ -0,0 +1,34 @@ += {company} Starlight suite of {pulsar-reg} extensions +:navtitle: Starlight + +The Starlight suite of extensions is a collection of {pulsar-reg} protocol handlers that extend an existing {pulsar-short} cluster. +The goal of these extensions is to create a native, seamless interaction with a {pulsar-short} cluster using existing tooling and clients. + +Each extension integrates two popular event streaming ecosystems, unlocking new use cases and reducing barriers for users adopting {pulsar-short}. +Leverage advantages from each ecosystem to build a truly unified event streaming platform, accelerating the development of real-time applications and services. + +The Starlight extensions are open source and included in https://www.ibm.com/docs/en/supportforpulsar[IBM Elite Support for {pulsar}]. + +== {starlight-kafka} + +The https://github.com/datastax/starlight-for-kafka[{starlight-kafka} extension] brings native Apache Kafka(R) protocol support to {pulsar} by introducing a Kafka protocol handler on {pulsar-short} brokers. + +For more information, see the xref:starlight-for-kafka:ROOT:index.adoc[{starlight-kafka} documentation]. + +== {starlight-rabbitmq} + +The https://github.com/datastax/starlight-for-rabbitmq[{starlight-rabbitmq} extension] brings native RabbitMQ(R) protocol support to {pulsar-reg}. + +For more information, see the xref:starlight-for-rabbitmq:ROOT:index.adoc[{starlight-rabbitmq} documentation]. + +== Starlight for JMS + +The https://github.com/datastax/pulsar-jms[Starlight for JMS extension] allows enterprises to take advantage of the scalability and resiliency of a modern streaming platform to run their existing JMS applications. + +For more information, see the xref:starlight-for-jms:ROOT:index.adoc[Starlight for JMS documentation]. + +== See also + +* xref:components:pulsar-beam.adoc[] +* xref:components:pulsar-sql.adoc[] +* xref:components:heartbeat-vm.adoc[] \ No newline at end of file diff --git a/modules/install-upgrade/pages/quickstart-helm-installs.adoc b/modules/install-upgrade/pages/quickstart-helm-installs.adoc index 63040869..734ecdc3 100644 --- a/modules/install-upgrade/pages/quickstart-helm-installs.adoc +++ b/modules/install-upgrade/pages/quickstart-helm-installs.adoc @@ -1,12 +1,12 @@ = Quick Start for Helm Chart installs -You have options for installing *DataStax Luna Streaming*: +You have options for installing *{company} Luna Streaming*: -* With the provided *DataStax Helm chart* for an existing Kubernetes environment locally or with a cloud provider, as covered in this topic. -* With the *DataStax Luna Streaming tarball* for deployment to a single server/VM, or to multiple servers/VMs. See xref:install-upgrade:quickstart-server-installs.adoc[Quick Start for Server/VM installs]. 
-* With the *DataStax Ansible scripts* provided at https://github.com/datastax/pulsar-ansible[https://github.com/datastax/pulsar-ansible]. +* With the provided *{company} Helm chart* for an existing Kubernetes environment locally or with a cloud provider, as covered in this topic. +* With the *{company} Luna Streaming tarball* for deployment to a single server/VM, or to multiple servers/VMs. See xref:install-upgrade:quickstart-server-installs.adoc[Quick Start for Server/VM installs]. +* With the *{company} Ansible scripts* provided at https://github.com/datastax/pulsar-ansible[https://github.com/datastax/pulsar-ansible]. -The Helm chart and options described below configure an Apache Pulsar cluster. +The Helm chart and options described below configure an {pulsar} cluster. It is designed for production use, but can also be used in local development environments with the proper settings. The resulting configuration includes support for: @@ -15,21 +15,21 @@ The resulting configuration includes support for: * xref:install-upgrade:quickstart-helm-installs.adoc#authentication[Authentication] * WebSocket Proxy * Standalone Functions Workers -* Pulsar IO Connectors +* {pulsar-short} IO Connectors * xref:install-upgrade:quickstart-helm-installs.adoc#_tiered_storage_configuration[Tiered Storage] including Tardigarde distributed cloud storage -* xref:install-upgrade:quickstart-helm-installs.adoc#_pulsar_sql_configuration[Pulsar SQL Workers] -* Pulsar Admin Console for managing the cluster -* Pulsar heartbeat +* xref:install-upgrade:quickstart-helm-installs.adoc#_pulsar_sql_configuration[{pulsar-short} SQL Workers] +* {pulsar-short} Admin Console for managing the cluster +* {pulsar-short} heartbeat * Burnell for API-based token generation -* Prometheus, Grafana, and Alertmanager stack with default Grafana dashboards and Pulsar-specific alerting rules +* Prometheus, Grafana, and Alertmanager stack with default Grafana dashboards and {pulsar-short}-specific alerting rules * cert-manager with support for self-signed certificates as well as public certificates using ACME; such as Let's Encrypt -* Ingress for all HTTP ports (Pulsar Admin Console, Prometheus, Grafana, others) +* Ingress for all HTTP ports ({pulsar-short} Admin Console, Prometheus, Grafana, others) == Prerequisites -For an example set of production cluster values, see the DataStax production-ready https://github.com/datastax/pulsar-helm-chart[Helm chart]. +For an example set of production cluster values, see the {company} production-ready https://github.com/datastax/pulsar-helm-chart[Helm chart]. -DataStax recommends these hardware resources for running Luna Streaming in a Kubernetes environment: +{company} recommends these hardware resources for running Luna Streaming in a Kubernetes environment: * Helm version 3 @@ -131,7 +131,7 @@ AKS:: -- ==== -* Create a custom storage configuration as a `yaml` file (https://github.com/datastax/pulsar-helm-chart/blob/master/helm-chart-sources/pulsar/templates/bookkeeper/bookkeeper-storageclass.yaml[like the DataStax example]) and tell the Helm chart to use that storage configuration when it creates the BookKeeper PVCs. +* Create a custom storage configuration as a `yaml` file (https://github.com/datastax/pulsar-helm-chart/blob/master/helm-chart-sources/pulsar/templates/bookkeeper/bookkeeper-storageclass.yaml[like the {company} example]) and tell the Helm chart to use that storage configuration when it creates the BookKeeper PVCs. 
+ [source,yaml] ---- @@ -148,7 +148,7 @@ First, create the namespace; in this example, we use `pulsar`. `kubectl create namespace pulsar` -Then run this helm command: +Then run this `helm` command: `helm install pulsar datastax-pulsar/pulsar --namespace pulsar --values storage_values.yaml --create-namespace` @@ -156,15 +156,15 @@ TIP: To avoid having to specify the `pulsar` namespace on each subsequent comman `kubectl config set-context $(kubectl config current-context) --namespace=pulsar` -Once Pulsar is installed, you can now access your Luna Streaming cluster. +Once {pulsar-short} is installed, you can now access your Luna Streaming cluster. === Access the Luna Streaming cluster -The default values will create a ClusterIP for all components. ClusterIPs are only accessible within the Kubernetes cluster. The easiest way to work with Pulsar is to log into the bastion host (assuming it is in the `pulsar` namespace): +The default values will create a ClusterIP for all components. ClusterIPs are only accessible within the Kubernetes cluster. The easiest way to work with {pulsar-short} is to log into the bastion host (assuming it is in the `pulsar` namespace): `kubectl exec $(kubectl get pods -l component=bastion -o jsonpath="{.items[*].metadata.name}" -n pulsar) -it -n pulsar -- /bin/bash` -Once you are logged into the bastion, you can run Pulsar admin commands: +Once you are logged into the bastion, you can run {pulsar-short} admin commands: ---- bin/pulsar-admin tenants list @@ -190,7 +190,7 @@ If you are using a load balancer on the proxy, you can find the IP address using `kubectl get service -n pulsar` -=== Manage Luna Streaming with Pulsar Admin Console +=== Manage Luna Streaming with {pulsar-short} Admin Console Or if you would rather go directly to the broker: @@ -198,25 +198,25 @@ Or if you would rather go directly to the broker: `kubectl port-forward -n pulsar $(kubectl get pods -n pulsar -l component=broker -o jsonpath='{.items[0].metadata.name}') 6650:6650` -=== Manage Luna Streaming with Pulsar Admin Console +=== Manage Luna Streaming with {pulsar-short} Admin Console -The Pulsar Admin Console is installed in your cluster by enabling the console with this values setting: +The {pulsar-short} Admin Console is installed in your cluster by enabling the console with this values setting: ---- component: pulsarAdminConsole: yes ---- -The Pulsar Admin Console will be automatically configured to connect to the Pulsar cluster. +The {pulsar-short} Admin Console will be automatically configured to connect to the {pulsar-short} cluster. -By default, the Pulsar Admin Console has authentication disabled. You can enable authentication with these settings: +By default, the {pulsar-short} Admin Console has authentication disabled. You can enable authentication with these settings: ---- pulsarAdminConsole: authMode: k8s ---- -To learn more about using the Pulsar Admin Console, see xref:components:admin-console-tutorial.adoc[Admin Console Tutorial]. +To learn more about using the {pulsar-short} Admin Console, see xref:components:admin-console-tutorial.adoc[Admin Console Tutorial]. 
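To apply both Admin Console settings shown above in one pass, you can keep them in a small overlay file and upgrade the release. This is a sketch under assumptions: the release name `pulsar`, the `datastax-pulsar/pulsar` chart, and the `pulsar` namespace come from the install command earlier on this page, and the overlay file name is arbitrary.

[source,shell]
----
# Write a minimal values overlay that enables the Admin Console with
# Kubernetes-based authentication, then upgrade the existing release.
cat > admin-console-values.yaml <<'EOF'
component:
  pulsarAdminConsole: yes
pulsarAdminConsole:
  authMode: k8s
EOF

# --reuse-values keeps whatever was set at install time and layers this overlay on top.
helm upgrade pulsar datastax-pulsar/pulsar --namespace pulsar --reuse-values --values admin-console-values.yaml
----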
== Install Luna Streaming locally @@ -266,15 +266,15 @@ pulsar-zookeeper-0 1/1 Running 0 pulsar-zookeeper-metadata-5l58k 0/1 Completed 0 12m ---- -Once all the pods are running, you can access the Pulsar Admin Console by forwarding to localhost: +Once all the pods are running, you can access the {pulsar-short} Admin Console by forwarding to localhost: `kubectl port-forward $(kubectl get pods -l component=adminconsole -o jsonpath='{.items[0].metadata.name}') 8080:80` -Now open a browser to \http://localhost:8080. In the Pulsar Admin Console, you can test your Pulsar setup using the built-in clients (Test Clients in the left-hand menu). +Now open a browser to \http://localhost:8080. In the {pulsar-short} Admin Console, you can test your {pulsar-short} setup using the built-in clients (Test Clients in the left-hand menu). -=== Access the Pulsar cluster on localhost +=== Access the {pulsar-short} cluster on localhost -To port forward the proxy admin and Pulsar ports to your local machine: +To port forward the proxy admin and {pulsar-short} ports to your local machine: `kubectl port-forward -n pulsar $(kubectl get pods -n pulsar -l component=proxy -o jsonpath='{.items[0].metadata.name}') 8080:8080` @@ -288,13 +288,13 @@ Or if you would rather go directly to the broker: === Access Admin Console on your local machine -To access Pulsar Admin Console on your local machine, forward port 80: +To access {pulsar-short} Admin Console on your local machine, forward port 80: ---- kubectl port-forward -n pulsar $(kubectl get pods -n pulsar -l component=adminconsole -o jsonpath='{.items[0].metadata.name}') 8888:80 ---- -TIP: While using the Admin Console and Pulsar Monitoring, if the connection to `localhost:3000` is refused, set a port-forward to the Grafana pod. Example: +TIP: While using the Admin Console and {pulsar-short} Monitoring, if the connection to `localhost:3000` is refused, set a port-forward to the Grafana pod. Example: ---- kubectl port-forward -n pulsar $(kubectl get pods -n pulsar -l app.kubernetes.io/name=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 & ---- @@ -324,7 +324,7 @@ helm install pulsar -f dev-values-auth.yaml datastax-pulsar/pulsar You can enable a full Prometheus stack (Prometheus, Alertmanager, Grafana) from [kube-prometheus](https://github.com/prometheus-operator/kube-prometheus). This includes default Prometheus rules and Grafana dashboards for Kubernetes. -In an addition, this chart can deploy Grafana dashboards for Pulsar as well as Pulsar-specific rules for Prometheus. +In addition, this chart can deploy Grafana dashboards for {pulsar-short} as well as {pulsar-short}-specific rules for Prometheus. To enable the Prometheus stack, use the following setting in your values file: @@ -354,9 +354,9 @@ Tiered storage (offload to blob storage) can be configured in the `storageOffloa In addition, you can configure any S3 compatible storage. There is explicit support for https://tardigrade.io[Tardigrade], which is a provider of secure, decentralized storage. You can enable the Tardigarde S3 gateway in the `extra` configuration. The instructions for configuring the gateway are provided in the `tardigrade` section of the `values.yaml` file. -=== Pulsar SQL Configuration +=== {pulsar-short} SQL Configuration -If you enable Pulsar SQL, the cluster provides https://prestodb.io/[Presto] access to the data stored in BookKeeper (and tiered storage, if enabled). Presto is exposed on the service named `-sql`.
+If you enable {pulsar-short} SQL, the cluster provides https://prestodb.io/[Presto] access to the data stored in BookKeeper (and tiered storage, if enabled). Presto is exposed on the service named `-sql`. The easiest way to access the Presto command line is to log into the bastion host and then connect to the Presto service port, like this: @@ -378,9 +378,9 @@ Splits: 17 total, 17 done (100.00%) 0:04 [2 rows, 144B] [0 rows/s, 37B/s] --------------------------------------- -To access Pulsar SQL from outside the cluster, you can enable the `ingress` option which will expose the Presto port on hostname. We have tested with the Traefik ingress, but any Kubernetes ingress should work. You can then run SQL queries using the Presto CLI and monitoring Presto using the built-in UI (point browser to the ingress hostname). Authentication is not enabled on the UI, so you can log in with any username. +To access {pulsar-short} SQL from outside the cluster, you can enable the `ingress` option which will expose the Presto port on hostname. We have tested with the Traefik ingress, but any Kubernetes ingress should work. You can then run SQL queries using the Presto CLI and monitoring Presto using the built-in UI (point browser to the ingress hostname). Authentication is not enabled on the UI, so you can log in with any username. -It is recommended that you match the Presto CLI version to the version running as part of Pulsar SQL. +It is recommended that you match the Presto CLI version to the version running as part of {pulsar-short} SQL. The Presto CLI supports basic authentication, so if you enabled that on the Ingress (using annotations), you can have secure Presto access. Example: @@ -410,8 +410,8 @@ The Helm chart has the following optional dependencies: [#authentication] === Authentication -The chart can enable token-based authentication for your Pulsar cluster. For information on token-based -authentication in Pulsar, see https://pulsar.apache.org/docs/en/security-token-admin/[Pulsar token authentication admin documentation]. +The chart can enable token-based authentication for your {pulsar-short} cluster. For information on token-based +authentication in {pulsar-short}, see https://pulsar.apache.org/docs/en/security-token-admin/[{pulsar-short} token authentication admin documentation]. For authentication to work, the token-generation keys need to be stored in Kubernetes secrets along with some default tokens (for superuser access). @@ -453,7 +453,7 @@ You can create the certificate like this: `kubectl create secret tls --key --cert ` -The resulting secret will be of type `kubernetes.io/tls`. The key should not be in `PKCS 8` format even though that is the format used by Pulsar. The format will be converted by the chart to `PKCS 8`. +The resulting secret will be of type `kubernetes.io/tls`. The key should not be in `PKCS 8` format even though that is the format used by {pulsar-short}. The format will be converted by the chart to `PKCS 8`. You can also specify the certificate information directly in the values: @@ -468,7 +468,7 @@ This is useful if you are using a self-signed certificate. For automated handling of publicly signed certificates, you can use a tool such as https://cert-mananager[cert-manager]. -For more information, see https://github.com/datastax/pulsar-helm-chart/blob/master/aws-customer-docs.md[Using Cert-Manager for Pulsar Certificates in AWS]. 
+For more information, see https://github.com/datastax/pulsar-helm-chart/blob/master/aws-customer-docs.md[Using Cert-Manager for {pulsar-short} Certificates in AWS]. Once you have created the secrets that store the certificate info (or specified it in the values), you can enable TLS in the values: @@ -477,7 +477,7 @@ Once you have created the secrets that store the certificate info (or specified [#video] == Getting started with Kubernetes video -Follow along with this video from our *Five Minutes About Pulsar* series to get started with a Helm installation. +Follow along with this video from our *Five Minutes About {pulsar-short}* series to get started with a Helm installation. video::hEBP_IVQqQM[youtube, list=PL2g2h-wyI4SqeKH16czlcQ5x4Q_z-X7_m] diff --git a/modules/install-upgrade/pages/quickstart-server-installs.adoc b/modules/install-upgrade/pages/quickstart-server-installs.adoc index a9030369..299d5300 100644 --- a/modules/install-upgrade/pages/quickstart-server-installs.adoc +++ b/modules/install-upgrade/pages/quickstart-server-installs.adoc @@ -1,15 +1,15 @@ = Quick Start for Bare Metal/VM installs -This document explains xref:install-upgrade:quickstart-server-installs.adoc#install[installation] of Luna Streaming for Bare Metal/VM deployments with a Pulsar tarball. +This document explains xref:install-upgrade:quickstart-server-installs.adoc#install[installation] of Luna Streaming for Bare Metal/VM deployments with a {pulsar-short} tarball. The resulting Luna Streaming deployment includes: * *Tiered Storage:* Offload historical messages to more cost effective object storages such as AWS S3, Azure Blob, Google Cloud Storage, and HDFS. * *Built-in Schema Registry:* Guarantee messaging type safety on a per-topic basis without relying on any external facility. -* *Pulsar I/O connectors:* Enables Pulsar to exchange data with external systems, either as sources or sinks. -* *Pulsar Function:* Lightweight compute extensions of Pulsar brokers which enable real-time simple event processing within Pulsar. -* *Pulsar SQL:* SQL-based interactive query for message data stored in Pulsar. -* *Pulsar Transactions:* enables event streaming applications to consume, process, and produce messages in one atomic operation. +* *{pulsar-short} I/O connectors:* Enables {pulsar-short} to exchange data with external systems, either as sources or sinks. +* *{pulsar-short} Function:* Lightweight compute extensions of {pulsar-short} brokers which enable real-time simple event processing within {pulsar-short}. +* *{pulsar-short} SQL:* SQL-based interactive query for message data stored in {pulsar-short}. +* *{pulsar-short} Transactions:* enables event streaming applications to consume, process, and produce messages in one atomic operation. == Requirements @@ -17,11 +17,11 @@ The resulting Luna Streaming deployment includes: * JDK 11 + -Pulsar can run with JDK8, but DataStax Luna Streaming is designed for Java 11. +{pulsar-short} can run with JDK8, but {company} Luna Streaming is designed for Java 11. * File System + -DataStax recommends XFS, but ext4 will work. +{company} recommends XFS, but ext4 will work. * For a single node install, a server with at least 8 CPU and 32 GB of memory is required. @@ -33,7 +33,7 @@ The servers must be on the same network so they can communicate with each other. * BookKeeper should use one volume device for the journal, and one volume device for the ledgers. The journal device should be 20GB. The ledger volume device should be sized to hold the expected amount of stored message data. 
-* DataStax recommends a separate data disk volume for ZooKeeper. +* {company} recommends a separate data disk volume for ZooKeeper. * Operating System Settings + @@ -43,7 +43,7 @@ Check this setting with `cat /sys/kernel/mm/transparent_hugepage/enabled` and `c [#install] == Installation -. Download the DataStax Luna Streaming tarball from the https://github.com/datastax/pulsar/releases[DataStax GitHub repo]. There are three versions of Luna Streaming currently available: +. Download the {company} Luna Streaming tarball from the https://github.com/datastax/pulsar/releases[{company} GitHub repo]. There are three versions of Luna Streaming currently available: + [cols="1,1"] [%autowidth] @@ -52,13 +52,13 @@ Check this setting with `cat /sys/kernel/mm/transparent_hugepage/enabled` and `c |*Included components* |`lunastreaming-core--bin.tar.gz` -|Contains the core Pulsar modules: Zookeeper, Broker, BookKeeper, and function worker +|Contains the core {pulsar-short} modules: Zookeeper, Broker, BookKeeper, and function worker |`lunastreaming--bin.tar.gz` -|Contains all components from `lunastreaming-core` as well as support for Pulsar SQL +|Contains all components from `lunastreaming-core` as well as support for {pulsar-short} SQL |`lunastreaming-all--bin.tar.gz` -|Contains all components from `lunastreaming` as well as the NAR files for all Pulsar I/O connectors and offloaders +|Contains all components from `lunastreaming` as well as the NAR files for all {pulsar-short} I/O connectors and offloaders |=== @@ -89,29 +89,29 @@ drwxr-xr-x@ 277 firstname.lastname staff 8864 May 17 05:58 lib drwxr-xr-x@ 25 firstname.lastname staff 800 Jan 22 2020 licenses ---- -You have successfully installed the DataStax Luna Streaming tarball. +You have successfully installed the {company} Luna Streaming tarball. == Additional tooling -Once the DataStax Luna Streaming tarball is installed, you may want to add additional tooling to your server/VM deployment. +Once the {company} Luna Streaming tarball is installed, you may want to add additional tooling to your server/VM deployment. -* *Pulsar Admin Console:* Web-based UI that administrates Pulsar. -Download the latest version from the https://github.com/datastax/pulsar-admin-console[DataStax GitHub repo] and follow the instructions xref:components:admin-console-vm.adoc[here]. +* *{pulsar-short} Admin Console:* Web-based UI that administrates {pulsar-short}. +Download the latest version from the https://github.com/datastax/pulsar-admin-console[{company} GitHub repo] and follow the instructions xref:components:admin-console-vm.adoc[here]. + [NOTE] ==== Admin Console requires https://nodejs.org/download/release/latest-v14.x/[NodeJS 14 LTS] and Nginx version 1.17.9+. ==== -* *Pulsar Heartbeat:* Monitors Pulsar cluster availability. -Download the latest version from the https://github.com/datastax/pulsar-heartbeat/releases/[DataStax GitHub repo] and follow the instructions xref:components:heartbeat-vm.adoc[here]. +* *{pulsar-short} Heartbeat:* Monitors {pulsar-short} cluster availability. +Download the latest version from the https://github.com/datastax/pulsar-heartbeat/releases/[{company} GitHub repo] and follow the instructions xref:components:heartbeat-vm.adoc[here]. == Next steps -* For initializing Pulsar components like BookKeeper and ZooKeeper, see the https://pulsar.apache.org/docs/deploy-bare-metal[Pulsar documentation]. 
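Once you have chosen a tarball, extracting it is a single step. The file and directory names below are assumptions -- substitute the flavor and version you actually downloaded, and note that the extracted directory name may differ slightly between releases.

[source,shell]
----
# Unpack the distribution and move into it; the directory listing that follows
# should roughly match what you see here.
tar xzf lunastreaming-all-<version>-bin.tar.gz
cd lunastreaming-all-<version>
ls -l
----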
+* For initializing {pulsar-short} components like BookKeeper and ZooKeeper, see the https://pulsar.apache.org/docs/deploy-bare-metal[{pulsar-short} documentation]. -* For installing optional built-in connectors or tiered storage included in `lunastreaming-all`, see the https://pulsar.apache.org/docs/deploy-bare-metal#install-builtin-connectors-optional[Pulsar documentation]. +* For installing optional built-in connectors or tiered storage included in `lunastreaming-all`, see the https://pulsar.apache.org/docs/deploy-bare-metal#install-builtin-connectors-optional[{pulsar-short} documentation]. * For installation to existing Kubernetes environments or with a cloud provider, see xref:install-upgrade:quickstart-helm-installs.adoc[Quick Start for Helm Chart installs]. -* For Ansible deployment, use the DataStax Ansible scripts provided at https://github.com/datastax/pulsar-ansible[https://github.com/datastax/pulsar-ansible]. \ No newline at end of file +* For Ansible deployment, use the {company} Ansible scripts provided at https://github.com/datastax/pulsar-ansible[https://github.com/datastax/pulsar-ansible]. \ No newline at end of file diff --git a/modules/operations/pages/auth.adoc b/modules/operations/pages/auth.adoc index 8c85269c..d81c6ddd 100644 --- a/modules/operations/pages/auth.adoc +++ b/modules/operations/pages/auth.adoc @@ -1,12 +1,12 @@ = Luna Streaming Authentication -The Helm chart can enable token-based authentication for your Pulsar cluster. For more, see https://pulsar.apache.org/docs/en/security-token-admin/[Pulsar token authentication]. +The Helm chart can enable token-based authentication for your {pulsar-short} cluster. For more, see https://pulsar.apache.org/docs/en/security-token-admin/[{pulsar-short} token authentication]. For authentication to work, the token-generation keys need to be stored in Kubernetes secrets along with superuser default tokens. The Helm chart includes tooling to automatically create the necessary secrets, or you can do this manually. -== Automatically generating secrets for Pulsar token authentication +== Automatically generating secrets for {pulsar-short} token authentication Use the following settings in your `values.yaml` file to enable automatic generation of the secrets and enable token-based authentication: @@ -17,7 +17,7 @@ autoRecovery: enableProvisionContainer: yes ---- -When `enableProvisionContainer` is enabled, Pulsar will check if the required secrets exist. If they don't exist, it will generate new token keys and use those keys to generate the default set of tokens. +When `enableProvisionContainer` is enabled, {pulsar-short} will check if the required secrets exist. If they don't exist, it will generate new token keys and use those keys to generate the default set of tokens. The name of the key secrets are: @@ -31,7 +31,7 @@ Using these keys will generate tokens for each role listed in `superUserRoles` i * `token-proxy` * `token-websocket` -== Manually generating secrets for Pulsar token authentication +== Manually generating secrets for {pulsar-short} token authentication include::ROOT:partial$manually-create-credentials.adoc[] @@ -49,7 +49,7 @@ Create the certificate: `kubectl create secret tls --key --cert ` -The resulting secret will be of type `kubernetes.io/tls`. The key should *not* be in `PKCS 8` format, even though that is the format used by Pulsar. The `kubernetes.io/tls` format will be converted by the chart to `PKCS 8`. +The resulting secret will be of type `kubernetes.io/tls`. 
The key should *not* be in `PKCS 8` format, even though that is the format used by {pulsar-short}. The `kubernetes.io/tls` format will be converted by the chart to `PKCS 8`. If you have a self-signed certificate, manually specify the certificate information directly in https://github.com/datastax/pulsar-helm-chart/blob/master/examples/dev-values-keycloak-auth.yaml[values]: @@ -66,11 +66,11 @@ Once you have created the secrets that store the certificate info (or manually s == Token Authentication via Keycloak Integration -DataStax created the https://github.com/datastax/pulsar-openid-connect-plugin[Pulsar OpenID Connect Authentication Plugin] to provide a more dynamic authentication option for Pulsar. This plugin integrates with any OpenID Connect-compliant identity provider to dynamically retrieve public keys for token validation. This dynamic public key retrieval enables support for key rotation and multiple authentication/identity providers by configuring multiple allowed token issuers. It also means that token secret keys will *not* be stored in Kubernetes secrets. +{company} created the https://github.com/datastax/pulsar-openid-connect-plugin[{pulsar-short} OpenID Connect Authentication Plugin] to provide a more dynamic authentication option for {pulsar-short}. This plugin integrates with any OpenID Connect-compliant identity provider to dynamically retrieve public keys for token validation. This dynamic public key retrieval enables support for key rotation and multiple authentication/identity providers by configuring multiple allowed token issuers. It also means that token secret keys will *not* be stored in Kubernetes secrets. -In order to simplify deployment for Pulsar cluster components, the plugin provides the option to use Keycloak in conjunction with Pulsar's basic token based authentication. For more, see https://github.com/datastax/pulsar-openid-connect-plugin[Pulsar OpenID Connect Authentication Plugin]. +In order to simplify deployment for {pulsar-short} cluster components, the plugin provides the option to use Keycloak in conjunction with {pulsar-short}'s basic token based authentication. For more, see https://github.com/datastax/pulsar-openid-connect-plugin[{pulsar-short} OpenID Connect Authentication Plugin]. -See the example https://github.com/datastax/pulsar-helm-chart/blob/master/examples/dev-values-keycloak-auth.yaml[Keycloak Helm chart] for deploying a working cluster that integrates with Keycloak. By default, the Helm chart creates a Pulsar realm within Keycloak and sets up the client used by the Pulsar Admin Console as well as a sample client and some sample groups. The configuration for the broker side auth plugin should be placed in the `.Values..configData` maps. +See the example https://github.com/datastax/pulsar-helm-chart/blob/master/examples/dev-values-keycloak-auth.yaml[Keycloak Helm chart] for deploying a working cluster that integrates with Keycloak. By default, the Helm chart creates a {pulsar-short} realm within Keycloak and sets up the client used by the {pulsar-short} Admin Console as well as a sample client and some sample groups. The configuration for the broker side auth plugin should be placed in the `.Values..configData` maps. === Configuring Keycloak for Token Generation @@ -97,35 +97,50 @@ keycloak: adminPassword: "F3LVqnxqMmkCQkvyPdJiwXodqQncK@" ---- -. Navigate to `localhost:8080` in a browser and view the Pulsar realm in the Keycloak UI. 
Note that the realm name must match the configured realm name (`.Values.keycloak.realm`) for the OpenID Connect plugin to work properly. +. Navigate to `localhost:8080` in a browser and view the {pulsar-short} realm in the Keycloak UI. Note that the realm name must match the configured realm name (`.Values.keycloak.realm`) for the OpenID Connect plugin to work properly. -The OpenID Connect plugin uses the `sub` (subject) claim from the JWT as the role used for authorization within Pulsar. To get Keycloak to generate the JWT for a client with the right `sub`, create a special "mapper" that is a "Hardcoded claim" mapping claim name sub to a claim value that is the desired role, like `superuser`. The default config installed by https://github.com/datastax/pulsar-helm-chart/blob/master/examples/dev-values-keycloak-auth.yaml[this helm chart] provides examples of how to add custom mapper protocols to clients. +The OpenID Connect plugin uses the `sub` (subject) claim from the JWT as the role used for authorization within {pulsar-short}. To get Keycloak to generate the JWT for a client with the right `sub`, create a special "mapper" that is a "Hardcoded claim" mapping claim name sub to a claim value that is the desired role, like `superuser`. The default config installed by https://github.com/datastax/pulsar-helm-chart/blob/master/examples/dev-values-keycloak-auth.yaml[this Helm chart] provides examples of how to add custom mapper protocols to clients. -=== Retrieving and using a token from Keycloak with Pulsar Admin CLI +=== Retrieving and using a token from Keycloak with {pulsar-short} Admin CLI -. After creating your realm and client, retrieve a token with the Pulsar Admin CLI. To generate a token that will have an allowed issuer, you should exec into a bastion pod in the k8s cluster. Exec'ing into a bastion host will give you immediate access to a `pulsar-admin` cli tool that you can use to verify that you have access. +. After creating your realm and client, retrieve a token with the {pulsar-short} Admin CLI. To generate a token that will have an allowed issuer, you should exec into a bastion pod in the k8s cluster. Exec'ing into a bastion host will give you immediate access to a `pulsar-admin` cli tool that you can use to verify that you have access. + +[source,shell] ---- kubectl -n default exec $(kubectl get pods --namespace default -l "app=pulsar,component=bastion" -o jsonpath="{.items[0].metadata.name}") -it -- bash ---- -. Run the following from a bastion pod to generate an allowed issuer token. +. 
Run the following from a bastion pod to generate an allowed issuer token: + +[source,shell] ---- pulsar@pulsar-bastion-85c9b777f6-gt9ct:/pulsar$ curl -d "client_id=test-client" \ -d "client_secret=19d9f4a2-65fb-4695-873c-d0c1d6bdadad" \ -d "grant_type=client_credentials" \ "http://test-keycloak/auth/realms/pulsar/protocol/openid-connect/token" -{"access_token":"eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJDY3c3ZXcwQ0hKMThfbWpCQzYxb2xOSU1wT0d3TkEyd1ZFbHBZLUdzb2tvIn0.eyJleHAiOjE2MjY5NzUwNzIsImlhdCI6MTYyNjk3NDQ3MiwianRpIjoiYTExZmFkY2YtYTJkZi00NmNkLTk0OWEtNDdkNzdmNDYxMDMxIiwiaXNzIjoiaHR0cDovL3Rlc3Qta2V5Y2xvYWsvYXV0aC9yZWFsbXMvcHVsc2FyIiwiYXVkIjoiYWNjb3VudCIsInN1YiI6ImQwN2UxOGIxLTE4YzQtNDZhMC1hNGU0LWE3YTZjNmRiMjFkYyIsInR5cCI6IkJlYXJlciIsImF6cCI6InRlc3QtY2xpZW50IiwiYWNyIjoiMSIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsImRlZmF1bHQtcm9sZXMtcHVsc2FyIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJzdWIiOiJzdXBlcnVzZXIiLCJjbGllbnRIb3N0IjoiMTcyLjE3LjAuMSIsImNsaWVudElkIjoidGVzdC1jbGllbnQiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsInByZWZlcnJlZF91c2VybmFtZSI6InNlcnZpY2UtYWNjb3VudC10ZXN0LWNsaWVudCIsImNsaWVudEFkZHJlc3MiOiIxNzIuMTcuMC4xIn0.FckQLOD64ZTKmx2uutP75QBpZAqHaqWyEE6jRUXvbSzsiXTAQyz-30zKsUSEjOMJp97NlTy3NZECVo_GdZ7oPcneFdglmFY62btWj-5s6ELcazj-AGQhyt0muGD4VP71xjpjCUpVxhyBIQlltGZLu7Rgw4trfh3LS8YjaY74vGg_BjOzZ8VI4S352lyGOULou7_dRbaeKhv43OfU7e_Y_ro_m_9UaDARypcj3uqSllhZdifA4YbHyaBCCu5eH19GCLtFm3I00PvWkOy3iTyOkkTcayqJ-Vlraf95qCZFN-sooIIU6o8L-wS-Zr7EvkoDJ-II9q49WHJJLIIvnCE2ug","expires_in":600,"refresh_expires_in":0,"token_type":"Bearer","not-before-policy":0,"scope":"email profile"} ---- ++ +.Results +[%collapsible] +==== +[source,json] +---- +{ + "access_token":"eyJhbGc...TRUNCATED...nCE2ug", + "expires_in":600, + "refresh_expires_in":0, + "token_type":"Bearer", + "not-before-policy":0, + "scope":"email profile" +} +---- +==== -. Copy the `access_token` contents and use it here: +. 
Copy the `access_token` contents and use it in the `pulsar-admin` command's `--auth-params` option: + +[source,shell] ---- -pulsar@pulsar-bastion-85c9b777f6-gt9ct:/pulsar$ bin/pulsar-admin --auth-params "token:eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6ICJDY3c3ZXcwQ0hKMThfbWpCQzYxb2xOSU1wT0d3TkEyd1ZFbHBZLUdzb2tvIn0.eyJleHAiOjE2MjY5NzUwNzIsImlhdCI6MTYyNjk3NDQ3MiwianRpIjoiYTExZmFkY2YtYTJkZi00NmNkLTk0OWEtNDdkNzdmNDYxMDMxIiwiaXNzIjoiaHR0cDovL3Rlc3Qta2V5Y2xvYWsvYXV0aC9yZWFsbXMvcHVsc2FyIiwiYXVkIjoiYWNjb3VudCIsInN1YiI6ImQwN2UxOGIxLTE4YzQtNDZhMC1hNGU0LWE3YTZjNmRiMjFkYyIsInR5cCI6IkJlYXJlciIsImF6cCI6InRlc3QtY2xpZW50IiwiYWNyIjoiMSIsInJlYWxtX2FjY2VzcyI6eyJyb2xlcyI6WyJvZmZsaW5lX2FjY2VzcyIsImRlZmF1bHQtcm9sZXMtcHVsc2FyIiwidW1hX2F1dGhvcml6YXRpb24iXX0sInJlc291cmNlX2FjY2VzcyI6eyJhY2NvdW50Ijp7InJvbGVzIjpbIm1hbmFnZS1hY2NvdW50IiwibWFuYWdlLWFjY291bnQtbGlua3MiLCJ2aWV3LXByb2ZpbGUiXX19LCJzY29wZSI6ImVtYWlsIHByb2ZpbGUiLCJzdWIiOiJzdXBlcnVzZXIiLCJjbGllbnRIb3N0IjoiMTcyLjE3LjAuMSIsImNsaWVudElkIjoidGVzdC1jbGllbnQiLCJlbWFpbF92ZXJpZmllZCI6ZmFsc2UsInByZWZlcnJlZF91c2VybmFtZSI6InNlcnZpY2UtYWNjb3VudC10ZXN0LWNsaWVudCIsImNsaWVudEFkZHJlc3MiOiIxNzIuMTcuMC4xIn0.FckQLOD64ZTKmx2uutP75QBpZAqHaqWyEE6jRUXvbSzsiXTAQyz-30zKsUSEjOMJp97NlTy3NZECVo_GdZ7oPcneFdglmFY62btWj-5s6ELcazj-AGQhyt0muGD4VP71xjpjCUpVxhyBIQlltGZLu7Rgw4trfh3LS8YjaY74vGg_BjOzZ8VI4S352lyGOULou7_dRbaeKhv43OfU7e_Y_ro_m_9UaDARypcj3uqSllhZdifA4YbHyaBCCu5eH19GCLtFm3I00PvWkOy3iTyOkkTcayqJ-Vlraf95qCZFN-sooIIU6o8L-wS-Zr7EvkoDJ-II9q49WHJJLIIvnCE2ug" \ - tenants list -"public" -"pulsar" +pulsar@pulsar-bastion-85c9b777f6-gt9ct:/pulsar$ bin/pulsar-admin --auth-params "token:eyJhb...TRUNCATED...E2ug" --tenants list ---- You're now using Keycloak tokens with `pulsar-admin` CLI. @@ -137,6 +152,7 @@ An alternative method for retrieving and using a Keycloak token from the bastion . Retrieve the client credentials from Keycloak as above. . Create a `creds.json` file and enter your retrieved credentials in this format: + +[source,json] ---- { "client_id": "pulsar-admin-example-client", @@ -147,17 +163,16 @@ An alternative method for retrieving and using a Keycloak token from the bastion . In the bastion pod, issue the command to use the Keycloak token: + +[source,shell] ---- pulsar@pulsar-broker-79b87f786d-tjvm7:/pulsar$ bin/pulsar-admin \ ---auth-plugin "org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2" ---auth-params '{"privateKey":"file:///pulsar/creds.json","issuerUrl":"http://test-keycloak:8081/auth/realms/pulsar","audience":"I dont matter"}' ---tenants list -public -pulsar +--auth-plugin "org.apache.pulsar.client.impl.auth.oauth2.AuthenticationOAuth2" +--auth-params '{"privateKey":"file:///pulsar/creds.json","issuerUrl":"http://test-keycloak:8081/auth/realms/pulsar","audience":"not used"}' +--tenants list ---- You're now using Keycloak tokens with `pulsar-admin` CLI. == Next steps -To connect with the Pulsar Admin console and start sending and consuming messages, see xref:components:admin-console-tutorial.adoc[Admin Console]. \ No newline at end of file +To connect with the {pulsar-short} Admin console and start sending and consuming messages, see xref:components:admin-console-tutorial.adoc[Admin Console]. 
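As a recap of the Keycloak workflow above, the token retrieval and the admin call can be chained in one bastion-shell snippet. This is a sketch rather than part of the original procedure: it reuses the `test-client` and `test-keycloak` names from the examples above, assumes `jq` is available in the pod, and expects you to substitute your real client secret.

[source,shell]
----
# Fetch a token and pass it straight to pulsar-admin; note that `tenants list`
# is the positional command that follows the global --auth-params option.
TOKEN=$(curl -s -d "client_id=test-client" \
  -d "client_secret=<client-secret>" \
  -d "grant_type=client_credentials" \
  "http://test-keycloak/auth/realms/pulsar/protocol/openid-connect/token" | jq -r .access_token)

bin/pulsar-admin --auth-params "token:${TOKEN}" tenants list
----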
\ No newline at end of file diff --git a/modules/operations/pages/functions.adoc b/modules/operations/pages/functions.adoc index 9706f86c..8433c082 100644 --- a/modules/operations/pages/functions.adoc +++ b/modules/operations/pages/functions.adoc @@ -2,11 +2,11 @@ Functions are lightweight compute processes that enable you to process each message received on a topic or multiple topics. You can apply custom logic to that message, transforming or enriching it, and then output it to a different topic. -Functions run inside Luna Streaming and are therefore serverless. Write the code for your function in Java, Python, or Go, then upload the code to the Pulsar cluster and deploy the function. The function will be automatically run for each message published to the specified input topic. See https://pulsar.apache.org/docs/en/functions-overview/[Pulsar Functions overview] for more information about Apache Pulsar(R) functions. +Functions run inside Luna Streaming and are therefore serverless. Write the code for your function in Java, Python, or Go, then upload the code to the {pulsar-short} cluster and deploy the function. The function will be automatically run for each message published to the specified input topic. See https://pulsar.apache.org/docs/en/functions-overview/[{pulsar-short} Functions overview] for more information about {pulsar-reg} functions. -== Manage functions using Pulsar Admin CLI +== Manage functions using {pulsar-short} Admin CLI -Add functions using the Pulsar Admin CLI. Create a new Python function to consume a message from one topic, add an exclamation point, and publish the results to another topic. +Add functions using the {pulsar-short} Admin CLI. Create a new Python function to consume a message from one topic, add an exclamation point, and publish the results to another topic. . Create the following Python function in `function.py`: + @@ -22,7 +22,7 @@ class ExclamationFunction(Function): return input + '!' ---- + -. Deploy `function.py` to your Pulsar cluster using the Pulsar Admin CLI: +. Deploy `function.py` to your {pulsar-short} cluster using the {pulsar-short} Admin CLI: + [source,bash] ---- @@ -44,7 +44,7 @@ If the function is set up and ready to accept messages, you should see "Created Triggering a function is a convenient way to test that the function is working. When you trigger a function, you are publishing a message on the function’s input topic, which triggers the function to run. -To test a function with the Pulsar CLI, send a test value with Pulsar CLI's `trigger`. +To test a function with the {pulsar-short} CLI, send a test value with {pulsar-short} CLI's `trigger`. . Listen for messages on the output topic: + @@ -68,9 +68,9 @@ $ ./pulsar-admin functions trigger \ + The trigger sends the string `Hello world` to your exclamation function. Your function should output `Hello world!` to your consumed output. -== Add Functions using Pulsar Admin Console +== Add Functions using {pulsar-short} Admin Console -If the Pulsar Admin Console is deployed, you can also add and manage the Pulsar functions in the *Functions* tab of the Admin Console web UI. +If the {pulsar-short} Admin Console is deployed, you can also add and manage the {pulsar-short} functions in the *Functions* tab of the Admin Console web UI. . Select *Choose File* to choose a local Function. In this example, we chose `exclamation_function.py`. Choose the file you want to pull the function from and which function you want to use within that file. 
+ @@ -91,7 +91,7 @@ Your input topics, output topics, log topics, and processing guarantees will aut . Provide a *Configuration Key* in the dropdown menu. + -For a list of configuration keys, see the https://pulsar.apache.org/functions-rest-api/#operation/registerFunction[Pulsar Functions API Docs]. +For a list of configuration keys, see the https://pulsar.apache.org/functions-rest-api/#operation/registerFunction[{pulsar-short} Functions API Docs]. . Select *Add* to add your function. @@ -131,7 +131,7 @@ A *Function-name Deleted Successfully!* flag will appear to let you know you've === Trigger your function -To trigger a function in the Pulsar Admin Console, select *Trigger* in the *Manage* dashboard. +To trigger a function in the {pulsar-short} Admin Console, select *Trigger* in the *Manage* dashboard. image::admin-console-trigger-function.png[Trigger Function] @@ -139,4 +139,4 @@ Enter your message in the *Message to Send* field, and select the output topic. == Next steps -For more about developing functions for Luna Streaming and Pulsar, see https://pulsar.apache.org/docs/en/functions-develop/[here]. \ No newline at end of file +For more about developing functions for Luna Streaming and {pulsar-short}, see https://pulsar.apache.org/docs/en/functions-develop/[here]. \ No newline at end of file diff --git a/modules/operations/pages/io-connectors.adoc b/modules/operations/pages/io-connectors.adoc index f93a1cf3..28c5d142 100644 --- a/modules/operations/pages/io-connectors.adoc +++ b/modules/operations/pages/io-connectors.adoc @@ -2,31 +2,31 @@ When you have Luna Streaming xref:install-upgrade:quickstart-server-installs.adoc[installed] and running, add IO connectors to connect your deployment to external systems like https://cassandra.apache.org/_/index.html[Apache Cassandra], https://www.elastic.co/[ElasticSearch], and more. -* xref:io-connectors.adoc#sink-connectors[Source connectors]: Source connectors read messages from external topics and persist the messages to Apache Pulsar(TM) topics. For more, see https://pulsar.apache.org/docs/en/io-connectors/#source-connector[Pulsar built-in connectors]. +* xref:io-connectors.adoc#sink-connectors[Source connectors]: Source connectors read messages from external topics and persist the messages to {pulsar-reg} topics. For more, see https://pulsar.apache.org/docs/en/io-connectors/#source-connector[{pulsar-short} built-in connectors]. -* xref:io-connectors.adoc#source-connectors[Sink connectors]: Sink connectors read messages from Pulsar topics and persist the messages to external systems. For more, see https://pulsar.apache.org/docs/en/io-connectors/#sink-connector[Pulsar built-in connectors]. +* xref:io-connectors.adoc#source-connectors[Sink connectors]: Sink connectors read messages from {pulsar-short} topics and persist the messages to external systems. For more, see https://pulsar.apache.org/docs/en/io-connectors/#sink-connector[{pulsar-short} built-in connectors]. This doc lists the connectors supported by *Luna Streaming*. [#sink-connectors] == Sink Connectors -*Sink connectors* read messages from Pulsar topics and persist the messages to external systems. +*Sink connectors* read messages from {pulsar-short} topics and persist the messages to external systems. -The following sink connectors are included in the `` deployment and are supported by DataStax Luna Streaming. +The following sink connectors are included in the `` deployment and are supported by {company} Luna Streaming. 
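Before configuring a specific connector, it can help to confirm which connector types your image actually ships. The check below is an optional aside, assuming a working `pulsar-admin` such as the one on the bastion pod used elsewhere in these docs:

[source,shell]
----
# List the built-in sink and source connectors available to the cluster.
bin/pulsar-admin sinks available-sinks
bin/pulsar-admin sources available-sources
----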
[#datastax-pulsar-sink] -=== DataStax enhanced Cassandra sink connector +=== {company} enhanced Cassandra sink connector -To configure, deploy, and use the DataStax enhanced Cassandra sink connector, see the xref:pulsar-connector:ROOT:index.adoc[DataStax Apache Pulsar Connector documentation]. +To configure, deploy, and use the {company} enhanced Cassandra sink connector, see the xref:pulsar-connector:ROOT:index.adoc[{company} {pulsar} Connector documentation]. -The DataStax enhanced Cassandra sink connector offers the following advantages over the OSS Pulsar Cassandra sink connector: +The {company} enhanced Cassandra sink connector offers the following advantages over the OSS {pulsar-short} Cassandra sink connector: -* Flexibility in mapping Apache Pulsar(TM) messages to DSE and Cassandra tables. +* Flexibility in mapping {pulsar-reg} messages to DSE and Cassandra tables. * Enterprise grade security support including built-in SSL, and LDAP integration. -* Consumes all Apache Pulsar(TM) primitives including primitives, JSON and Avro formats. +* Consumes all {pulsar-reg} primitives including primitives, JSON and Avro formats. * Flexible time/date formatting. @@ -46,60 +46,60 @@ To configure, deploy, and use the ElasticSearch sink connector, see the xref:io- [#jdbc-clickhouse-sink] === JDBC-Clickhouse sink -To configure, deploy, and use the JDBC-Clickhouse sink connector, see the https://pulsar.apache.org/docs/next/io-jdbc-sink/[Pulsar documentation]. +To configure, deploy, and use the JDBC-Clickhouse sink connector, see the https://pulsar.apache.org/docs/next/io-jdbc-sink/[{pulsar-short} documentation]. [#jdbc-mariadb-sink] === JDBC-MariaDB sink -To configure, deploy, and use the JDBC-MariaDB sink connector, see the https://pulsar.apache.org/docs/next/io-jdbc-sink#example-for-mariadb[Pulsar documentation]. +To configure, deploy, and use the JDBC-MariaDB sink connector, see the https://pulsar.apache.org/docs/next/io-jdbc-sink#example-for-mariadb[{pulsar-short} documentation]. [#jdbc-postgres-sink] === JDBC-PostgreSQL sink -To configure, deploy, and use the JDBC-PostgreSQL connector, see the https://pulsar.apache.org/docs/next/io-jdbc-sink#example-for-postgresql[Pulsar documentation]. +To configure, deploy, and use the JDBC-PostgreSQL connector, see the https://pulsar.apache.org/docs/next/io-jdbc-sink#example-for-postgresql[{pulsar-short} documentation]. [#kafka-sink] === Kafka sink -To configure, deploy, and use the Kafka sink connector, see the https://pulsar.apache.org/docs/next/io-kafka-sink#configuration[Pulsar documentation]. +To configure, deploy, and use the Kafka sink connector, see the https://pulsar.apache.org/docs/next/io-kafka-sink#configuration[{pulsar-short} documentation]. [#kinesis-sink] === Kinesis sink -To configure, deploy, and use the Kinesis sink connector, see the https://pulsar.apache.org/docs/next/io-kinesis-sink#configuration[Pulsar documentation]. +To configure, deploy, and use the Kinesis sink connector, see the https://pulsar.apache.org/docs/next/io-kinesis-sink#configuration[{pulsar-short} documentation]. [#source-connectors] == Source Connectors -*Source connectors* read messages from external topics and persist the messages to Pulsar topics. +*Source connectors* read messages from external topics and persist the messages to {pulsar-short} topics. -The following sink connectors are included in the `` deployment and are supported by DataStax Luna Streaming. 
+The following source connectors are included in the `` deployment and are supported by {company} Luna Streaming.

[#debezium-mongodb-source]
=== Debezium MongoDB source

-To configure, deploy, and use the Debezium MongoDB source connector, see the https://pulsar.apache.org/docs/next/io-debezium-source#mongodb-configuration[Pulsar documentation].
+To configure, deploy, and use the Debezium MongoDB source connector, see the https://pulsar.apache.org/docs/next/io-debezium-source#mongodb-configuration[{pulsar-short} documentation].

[#debezium-mysql-source]
=== Debezium MySQL source

-To configure, deploy, and use the Debezium MySQL source connector, see the https://pulsar.apache.org/docs/next/io-debezium-source#configuration-1[Pulsar documentation].
+To configure, deploy, and use the Debezium MySQL source connector, see the https://pulsar.apache.org/docs/next/io-debezium-source#configuration-1[{pulsar-short} documentation].

[#debezium-postgres-source]
=== Debezium Postgres source

-To configure, deploy, and use the Debezium PostgreSQL source connector, see the https://pulsar.apache.org/docs/next/io-debezium-source#configuration-2[Pulsar documentation].
+To configure, deploy, and use the Debezium PostgreSQL source connector, see the https://pulsar.apache.org/docs/next/io-debezium-source#configuration-2[{pulsar-short} documentation].

[#kafka-source]
=== Kafka source

-To configure, deploy, and use the Kafka source connector, see the https://pulsar.apache.org/docs/next/io-kafka-source#configuration[Pulsar documentation].
+To configure, deploy, and use the Kafka source connector, see the https://pulsar.apache.org/docs/next/io-kafka-source#configuration[{pulsar-short} documentation].

[#kinesis-source]
=== Kinesis source

-To configure, deploy, and use the Kinesis source connector, see the https://pulsar.apache.org/docs/next/io-kinesis-source#configuration[Pulsar documentation].
+To configure, deploy, and use the Kinesis source connector, see the https://pulsar.apache.org/docs/next/io-kinesis-source#configuration[{pulsar-short} documentation].

== Next steps

-For more on Pulsar IO connectors, see the https://pulsar.apache.org/docs/en/io-overview/[Pulsar documentation].
\ No newline at end of file
+For more on {pulsar-short} IO connectors, see the https://pulsar.apache.org/docs/en/io-overview/[{pulsar-short} documentation].
\ No newline at end of file
diff --git a/modules/operations/pages/io-elastic-sink.adoc b/modules/operations/pages/io-elastic-sink.adoc
index fb87b6d7..eaf024ac 100644
--- a/modules/operations/pages/io-elastic-sink.adoc
+++ b/modules/operations/pages/io-elastic-sink.adoc
@@ -1,6 +1,6 @@
= Elasticsearch sink connector

-The https://www.elastic.co/elasticsearch/[Elasticsearch] sink connector reads messages from Pulsar topics and persists messages to indexes.
+The https://www.elastic.co/elasticsearch/[Elasticsearch] sink connector reads messages from {pulsar-short} topics and persists messages to indexes.

* xref:io-elastic-sink.adoc#configuration[Configuration]
* xref:io-elastic-sink.adoc#ssl-configuration[ElasticSearchSslConfig properties]
@@ -50,7 +50,7 @@ The configuration of the Elasticsearch sink connector has the following properti
| `socketTimeoutInMs` | Integer | false |60000 | The socket timeout in milliseconds waiting to read the elasticsearch response.
| `ssl` | ElasticSearchSslConfig | false | string | Configuration for TLS encrypted communication. See xref:io-elastic-sink.adoc#ssl-configuration[].
| `stripNonPrintableCharacters` | Boolean| false | true| Whether to remove all non-printable characters from the document or not. If it is set to true, all non-printable characters are removed from the document.
-| `stripNulls` | Boolean | false |true | If stripNulls is false, elasticsearch _source includes 'null' for empty fields (for example {"foo": null}), otherwise null fields are stripped.
+| `stripNulls` | Boolean | false |true | If stripNulls is false, elasticsearch _source includes 'null' for empty fields (for example `{"foo": null}`), otherwise null fields are stripped.
| `token` | String| false | " " (empty string)|The token used by the connector to connect to the ElasticSearch cluster. Only one between basic/token/apiKey authentication mode must be configured.
| `typeName` | String | false | "_doc" | The type name to which the connector writes messages to. The value should be set explicitly to a valid type name other than "_doc" for Elasticsearch version before 6.2, and left to default otherwise.
| `username` | String| false |" " (empty string)| The username used by the connector to connect to the elastic search cluster. If `username` is set, then `password` should also be provided.
@@ -138,7 +138,7 @@ $ docker run -p 9200:9200 -p 9300:9300 \
docker.elastic.co/elasticsearch/elasticsearch:7.13.3
----

-. Start a Pulsar service locally in standalone mode.
+. Start a {pulsar-short} service locally in standalone mode.
+
[source,bash]
----
$ bin/pulsar standalone
----
+
. Make sure the connector NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`.
+
-. Start the Pulsar Elasticsearch connector in local run mode using the JSON or YAML configuration file.
+. Start the {pulsar-short} Elasticsearch connector in local run mode using the JSON or YAML configuration file.
+
[tabs]
====
@@ -273,7 +273,7 @@ $ docker run -p 9200:9200 -p 9300:9300 \
docker.elastic.co/elasticsearch/elasticsearch:7.13.3
----

-. Start a Pulsar service locally in standalone mode.
+. Start a {pulsar-short} service locally in standalone mode.
+
[source,bash]
----
$ bin/pulsar standalone
----
+
. Make sure the connector NAR file is available at `connectors/pulsar-io-elastic-search-@pulsar:version@.nar`.
+
-. Start the Pulsar Elasticsearch connector in local run mode using the JSON or YAML configuration file.
+. Start the {pulsar-short} Elasticsearch connector in local run mode using the JSON or YAML configuration file.
+
[tabs]
====
diff --git a/modules/operations/pages/scale-cluster.adoc b/modules/operations/pages/scale-cluster.adoc
index 1b5fc4e8..1cd69532 100644
--- a/modules/operations/pages/scale-cluster.adoc
+++ b/modules/operations/pages/scale-cluster.adoc
@@ -2,9 +2,9 @@

This page will show you how to scale Luna Streaming clusters up for more compute capacity, or down for less.

-== Installing Pulsar cluster
+== Installing a {pulsar-short} cluster

-For our Pulsar cluster installation, use this https://github.com/datastax/pulsar-helm-chart[Helm chart].
+For our {pulsar-short} cluster installation, use this https://github.com/datastax/pulsar-helm-chart[Helm chart].

To start the cluster, use the values provided in this https://github.com/datastax/pulsar-helm-chart/blob/master/examples/dev-values.yaml[YAML file].

@@ -31,7 +31,7 @@ $ diff ~/dev-values.yaml ~/dev-values_large.yaml
> defaultWriteQuorum: 3
----

-. Create the cluster by installing Pulsar with `dev-values_large.yaml`:
+. Create the cluster by installing {pulsar-short} with `dev-values_large.yaml`:
+
----
$ helm install pulsar -f ~/dev-values_large.yaml --wait datastax-pulsar/pulsar
@@ -84,7 +84,7 @@ To scale up your cluster, change the `replicaCount` value in the YAML file to a
bookkeeper:
  replicaCount: 5
----

-. Upgrade the Helm chart to use the new value in the Pulsar cluster:
+. Upgrade the Helm chart to use the new value in the {pulsar-short} cluster:
+
----
$ helm upgrade pulsar -f ~/dev-values_large.yaml --wait datastax-pulsar/pulsar
diff --git a/modules/operations/pages/troubleshooting.adoc b/modules/operations/pages/troubleshooting.adoc
index 8094750d..d068ba57 100644
--- a/modules/operations/pages/troubleshooting.adoc
+++ b/modules/operations/pages/troubleshooting.adoc
@@ -36,7 +36,7 @@ image::gcp-quota-example2.png[GCP Backend Quota]

If your pods are stuck in a *Pending* state after installation or your cloud provider is warning you about *Unschedulable Pods*, there are a few ways to work through this:

-* If some of your pods start, but others like `pulsar-adminconsole` and `pulsar-grafana` are left in an *Unschedulable* state, you might need to add CPUs to your existing nodes or an additional node pool. Luna Streaming requires more resources than Apache Pulsar.
+* If some of your pods start, but others like `pulsar-adminconsole` and `pulsar-grafana` are left in an *Unschedulable* state, you might need to add CPUs to your existing nodes or add an additional node pool. Luna Streaming requires more resources than {pulsar}.

* To examine a specific pod, use `kubectl describe`. For example, if your `pulsar-bookkeeper-0` pod is not scheduling, use `kubectl describe pods/pulsar-bookkeeper-0` to view detailed output on the pod's state, dependencies, and events.

@@ -78,7 +78,7 @@ If the shell finds no resources, you might not have any public namespaces. Creat

=== Publish a message

-To test your Pulsar cluster with the bastion pod, produce a message with `pulsar-client` through the bastion pod shell:
+To test your {pulsar-short} cluster with the bastion pod, produce a message with `pulsar-client` through the bastion pod shell:

`pulsar-client produce my-test-topic --messages "hello-pulsar"`

@@ -86,7 +86,7 @@ You should receive a confirmation the message was produced:

`00:16:37.970 [main] INFO org.apache.pulsar.client.cli.PulsarClientTool - 1 messages successfully produced`

-This means your Pulsar cluster is functional. If the message isn't produced, double-check your message syntax.
+This means your {pulsar-short} cluster is functional. If the message isn't produced, double-check your message syntax.

== Next steps
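As a follow-up to the publish test above, a quick consume check from the same bastion pod shell can confirm that messages flow end to end; a minimal sketch, assuming the subscription name `test-sub` is an arbitrary placeholder:

[source,bash]
----
# Read one message back from the test topic
# (start this consumer before re-running the produce command, since a new
#  subscription only receives messages published after it is created)
$ pulsar-client consume my-test-topic -s test-sub -n 1
----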